diff --git a/README.md b/README.md index 54b968b..c57ac7f 100644 --- a/README.md +++ b/README.md @@ -31,24 +31,24 @@ ServiceRadar can be installed via direct downloads from GitHub releases. Install these components on your monitored host: ```bash # Download and install core components -curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-agent_1.0.10.deb \ - -O https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-poller_1.0.10.deb +curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-agent_1.0.11.deb \ + -O https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-poller_1.0.11.deb -sudo dpkg -i serviceradar-agent_1.0.10.deb serviceradar-poller_1.0.10.deb +sudo dpkg -i serviceradar-agent_1.0.11.deb serviceradar-poller_1.0.11.deb ``` On a separate machine (recommended) or the same host: ```bash # Download and install cloud service -curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-cloud_1.0.10.deb -sudo dpkg -i serviceradar-cloud_1.0.10.deb +curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-cloud_1.0.11.deb +sudo dpkg -i serviceradar-cloud_1.0.11.deb ``` #### Optional: Dusk Node Monitoring If you're running a [Dusk](https://dusk.network/) node and want specialized monitoring: ```bash -curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.10/serviceradar-dusk-checker_1.0.10.deb -sudo dpkg -i serviceradar-dusk-checker_1.0.10.deb +curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.11/serviceradar-dusk-checker_1.0.11.deb +sudo dpkg -i serviceradar-dusk-checker_1.0.11.deb ``` #### Distributed Setup @@ -56,20 +56,20 @@ For larger deployments where components run on different hosts: 1. On monitored hosts: ```bash -curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.10/serviceradar-agent_1.0.10.deb -sudo dpkg -i serviceradar-agent_1.0.10.deb +curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.11/serviceradar-agent_1.0.11.deb +sudo dpkg -i serviceradar-agent_1.0.11.deb ``` 2. On monitoring host: ```bash -curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-poller_1.0.10.deb -sudo dpkg -i serviceradar-poller_1.0.10.deb +curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-poller_1.0.11.deb +sudo dpkg -i serviceradar-poller_1.0.11.deb ``` 3. On cloud host: ```bash -curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-cloud_1.0.10.deb -sudo dpkg -i serviceradar-cloud_1.0.10.deb +curl -LO https://github.com/mfreeman451/serviceradar/releases/download/1.0.3/serviceradar-cloud_1.0.11.deb +sudo dpkg -i serviceradar-cloud_1.0.11.deb ``` ## Architecture @@ -171,19 +171,19 @@ cd serviceradar 1. **Agent Installation** (on monitored hosts): ```bash -sudo dpkg -i serviceradar-dusk-checker_1.0.10.deb # For Dusk nodes +sudo dpkg -i serviceradar-dusk-checker_1.0.11.deb # For Dusk nodes # or -sudo dpkg -i serviceradar-agent_1.0.10.deb # For other hosts +sudo dpkg -i serviceradar-agent_1.0.11.deb # For other hosts ``` 2. **Poller Installation** (on any host in your network): ```bash -sudo dpkg -i serviceradar-poller_1.0.10.deb +sudo dpkg -i serviceradar-poller_1.0.11.deb ``` 3. 
**Cloud Installation** (on a reliable host): ```bash -sudo dpkg -i serviceradar-cloud_1.0.10.deb +sudo dpkg -i serviceradar-cloud_1.0.11.deb ``` ## Configuration diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 0000000..7e72819 --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,214 @@ +# Security Configuration + +ServiceRadar supports multiple security modes for gRPC communication between components. Choose the mode that best fits your environment and security requirements. + +## Quick Start + +The simplest secure configuration uses basic TLS: + +```json +{ + "security": { + "mode": "tls", + "cert_dir": "/etc/serviceradar/certs" + } +} +``` + +## Security Modes + +### Development Mode (No Security) +⚠️ **Not recommended for production use** + +```json +{ + "security": { + "mode": "none" + } +} +``` + +### Basic TLS +Provides encryption and server authentication: + +```json +{ + "security": { + "mode": "tls", + "cert_dir": "/etc/serviceradar/certs" + } +} +``` + +Required files in cert_dir: +- `ca.crt`: Certificate Authority certificate +- `server.crt`: Server certificate +- `server.key`: Server private key + +### Mutual TLS (mTLS) +Provides encryption with both server and client authentication: + +```json +{ + "security": { + "mode": "mtls", + "cert_dir": "/etc/serviceradar/certs" + } +} +``` + +Required files in cert_dir: +- `ca.crt`: Certificate Authority certificate +- `server.crt`: Server certificate +- `server.key`: Server private key +- `client.crt`: Client certificate +- `client.key`: Client private key + +### SPIFFE/SPIRE Integration +Zero-trust workload identity using SPIFFE: + +```json +{ + "security": { + "mode": "spiffe", + "trust_domain": "example.org", + "workload_socket": "unix:/run/spire/sockets/agent.sock" + } +} +``` + +## Kubernetes Deployment + +### With SPIFFE/SPIRE + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: serviceradar +spec: + containers: + - name: serviceradar + image: serviceradar:latest + env: + - name: SR_SECURITY_MODE + value: "spiffe" + - name: SR_TRUST_DOMAIN + value: "example.org" + volumeMounts: + - name: spire-socket + mountPath: /run/spire/sockets + readOnly: true + volumes: + - name: spire-socket + hostPath: + path: /run/spire/sockets + type: Directory +``` + +### With mTLS + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: serviceradar +spec: + containers: + - name: serviceradar + image: serviceradar:latest + env: + - name: SR_SECURITY_MODE + value: "mtls" + - name: SR_CERT_DIR + value: "/etc/serviceradar/certs" + volumeMounts: + - name: certs + mountPath: /etc/serviceradar/certs + readOnly: true + volumes: + - name: certs + secret: + secretName: serviceradar-certs +``` + +## Certificate Management + +### Generating Self-Signed Certificates + +For testing or development environments, you can generate self-signed certificates using the provided tool: + +```bash +# Generate basic TLS certificates +serviceradar cert generate --dir /etc/serviceradar/certs + +# Generate mTLS certificates (includes client certs) +serviceradar cert generate --dir /etc/serviceradar/certs --mtls + +# View certificate information +serviceradar cert info --dir /etc/serviceradar/certs +``` + +### Using Existing PKI + +If you have an existing PKI infrastructure, place your certificates in the configured certificate directory: + +```bash +# Example directory structure +/etc/serviceradar/certs/ +├── ca.crt +├── server.crt +├── server.key +├── client.crt # Only needed for mTLS +└── client.key # Only needed for mTLS +``` + +### Certificate Rotation + 
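+
+To check how much validity remains on the current certificates before rotating, you can inspect them with `openssl` — a minimal sketch, assuming `openssl` is installed and the certificates live in the default `cert_dir`:
+
+```bash
+# Print the expiry date and subject of the server certificate
+openssl x509 -in /etc/serviceradar/certs/server.crt -noout -enddate -subject
+
+# Verify the server certificate chains to the local CA
+openssl verify -CAfile /etc/serviceradar/certs/ca.crt /etc/serviceradar/certs/server.crt
+```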
+ServiceRadar automatically detects and reloads certificates when they change. For SPIFFE mode, certificate rotation is handled automatically by the SPIFFE Workload API. + +## Environment Variables + +All security settings can be configured via environment variables: + +```bash +# Security mode +export SR_SECURITY_MODE=mtls + +# Certificate directory for TLS/mTLS modes +export SR_CERT_DIR=/etc/serviceradar/certs + +# SPIFFE configuration +export SR_TRUST_DOMAIN=example.org +export SR_WORKLOAD_SOCKET=unix:/run/spire/sockets/agent.sock +``` + +## Security Best Practices + +1. Always use a secure mode in production environments +2. Regularly rotate certificates +3. Use mTLS or SPIFFE for zero-trust environments +4. Keep private keys protected (0600 permissions) +5. Monitor certificate expiration +6. Use separate certificates for different components + +## Troubleshooting + +Common issues and solutions: + +1. **Certificate not found errors** + - Verify certificate paths + - Check file permissions + - Ensure certificates are in PEM format + +2. **SPIFFE Workload API connection issues** + - Check SPIFFE agent is running + - Verify socket path and permissions + - Confirm trust domain configuration + +3. **mTLS authentication failures** + - Verify client and server certificates are signed by the same CA + - Check certificate expiration dates + - Confirm trust domain matches (SPIFFE mode) + +For more detailed security configuration and best practices, see the [full documentation](https://docs.serviceradar.example.com/security). \ No newline at end of file diff --git a/buildAll.sh b/buildAll.sh index f9233dc..8272ba9 100755 --- a/buildAll.sh +++ b/buildAll.sh @@ -1,6 +1,6 @@ #!/bin/bash -VERSION=${VERSION:-1.0.10} +VERSION=${VERSION:-1.0.11} ./setup-deb-poller.sh diff --git a/go.mod b/go.mod index 4c471e4..780d5d4 100644 --- a/go.mod +++ b/go.mod @@ -6,6 +6,7 @@ require ( github.com/gorilla/mux v1.8.1 github.com/gorilla/websocket v1.5.3 github.com/mattn/go-sqlite3 v1.14.24 + github.com/spiffe/go-spiffe/v2 v2.4.0 github.com/stretchr/testify v1.9.0 go.uber.org/mock v0.5.0 golang.org/x/net v0.34.0 @@ -14,8 +15,13 @@ require ( ) require ( + github.com/Microsoft/go-winio v0.6.2 // indirect github.com/davecgh/go-spew v1.1.1 // indirect + github.com/go-jose/go-jose/v4 v4.0.4 // indirect + github.com/kr/text v0.2.0 // indirect github.com/pmezard/go-difflib v1.0.0 // indirect + github.com/zeebo/errs v1.4.0 // indirect + golang.org/x/crypto v0.32.0 // indirect golang.org/x/sys v0.29.0 // indirect golang.org/x/text v0.21.0 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20250127172529-29210b9bc287 // indirect diff --git a/go.sum b/go.sum index d59537f..0144e2d 100644 --- a/go.sum +++ b/go.sum @@ -1,5 +1,10 @@ +github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY= +github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU= +github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= +github.com/go-jose/go-jose/v4 v4.0.4 h1:VsjPI33J0SB9vQM6PLmNjoHqMQNGPiZ0rHL7Ni7Q6/E= +github.com/go-jose/go-jose/v4 v4.0.4/go.mod h1:NKb5HO1EZccyMpiZNbdUw/14tiXNyUJh188dfnMCAfc= github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr 
v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= @@ -14,12 +19,20 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY= github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ= github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg= github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= +github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI= +github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= +github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= +github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= github.com/mattn/go-sqlite3 v1.14.24 h1:tpSp2G2KyMnnQu99ngJ47EIkWVmliIizyZBfPrBWDRM= github.com/mattn/go-sqlite3 v1.14.24/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/spiffe/go-spiffe/v2 v2.4.0 h1:j/FynG7hi2azrBG5cvjRcnQ4sux/VNj8FAVc99Fl66c= +github.com/spiffe/go-spiffe/v2 v2.4.0/go.mod h1:m5qJ1hGzjxjtrkGHZupoXHo/FDWwCB1MdSyBzfHugx0= github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= +github.com/zeebo/errs v1.4.0 h1:XNdoD/RRMKP7HD0UhJnIzUy74ISdGGxURlYG8HSWSfM= +github.com/zeebo/errs v1.4.0/go.mod h1:sgbWHsvVuTPHcqJJGQ1WhI5KbWlHYz+2+2C/LSEtCw4= go.opentelemetry.io/otel v1.32.0 h1:WnBN+Xjcteh0zdk01SVqV55d/m62NJLJdIyb4y/WO5U= go.opentelemetry.io/otel v1.32.0/go.mod h1:00DCVSB0RQcnzlwyTfqtxSm+DRr9hpYrHjNGiBHVQIg= go.opentelemetry.io/otel/metric v1.32.0 h1:xV2umtmNcThh2/a/aCP+h64Xx5wsj8qqnkYZktzNa0M= @@ -32,6 +45,8 @@ go.opentelemetry.io/otel/trace v1.32.0 h1:WIC9mYrXf8TmY/EXuULKc8hR17vE+Hjv2cssQD go.opentelemetry.io/otel/trace v1.32.0/go.mod h1:+i4rkvCraA+tG6AzwloGaCtkx53Fa+L+V8e9a7YvhT8= go.uber.org/mock v0.5.0 h1:KAMbZvZPyBPWgD14IrIQ38QCyjwpvVVV6K/bHl1IwQU= go.uber.org/mock v0.5.0/go.mod h1:ge71pBPLYDk7QIi1LupWxdAykm7KIEFchiOqd6z7qMM= +golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc= +golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc= golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0= golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k= golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= @@ -44,7 +59,8 @@ google.golang.org/grpc v1.70.0 h1:pWFv03aZoHzlRKHWicjsZytKAiYCtNS0dHbXnIdq7jQ= google.golang.org/grpc v1.70.0/go.mod h1:ofIJqVKDXx/JiXrwr2IG4/zwdH9txy3IlF40RmcJSQw= google.golang.org/protobuf v1.36.4 h1:6A3ZDJHn/eNqc1i+IdefRzy/9PokBTPvcqMySR7NNIM= google.golang.org/protobuf v1.36.4/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= -gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= +gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git 
a/pkg/agent/sweep_service.go b/pkg/agent/sweep_service.go index 40c0e06..71fa3b2 100644 --- a/pkg/agent/sweep_service.go +++ b/pkg/agent/sweep_service.go @@ -57,7 +57,7 @@ func NewSweepService(config *models.Config) (Service, error) { ) // Create processor instance - processor := sweeper.NewBaseProcessor() + processor := sweeper.NewBaseProcessor(config) // Create an in-memory store store := sweeper.NewInMemoryStore(processor) @@ -467,9 +467,6 @@ func (s *SweepService) GetStatus(ctx context.Context) (*proto.StatusResponse, er return nil, fmt.Errorf("failed to marshal sweep status: %w", err) } - // Log the response data for debugging - log.Printf("Sweep status response: %s", string(statusJSON)) - return &proto.StatusResponse{ Available: true, Message: string(statusJSON), diff --git a/pkg/cloud/server.go b/pkg/cloud/server.go index debdbd4..2dc7569 100644 --- a/pkg/cloud/server.go +++ b/pkg/cloud/server.go @@ -252,7 +252,7 @@ func (s *Server) sendStartupNotification(ctx context.Context) error { Timestamp: time.Now().UTC().Format(time.RFC3339), NodeID: "cloud", Details: map[string]any{ - "version": "1.0.10", + "version": "1.0.11", "hostname": getHostname(), }, } diff --git a/pkg/grpc/cert_manager.go b/pkg/grpc/cert_manager.go new file mode 100644 index 0000000..610aeee --- /dev/null +++ b/pkg/grpc/cert_manager.go @@ -0,0 +1,95 @@ +package grpc + +import ( + "fmt" + "os" + "path/filepath" + "strings" +) + +const ( + certManagerPerms = 0700 +) + +var ( + errMissingCerts = fmt.Errorf("missing certificates") +) + +// CertificateManager helps manage TLS certificates. +type CertificateManager struct { + config *SecurityConfig +} + +func NewCertificateManager(config *SecurityConfig) *CertificateManager { + return &CertificateManager{config: config} +} + +func (cm *CertificateManager) EnsureCertificateDirectory() error { + return os.MkdirAll(cm.config.CertDir, certManagerPerms) +} + +func (cm *CertificateManager) ValidateCertificates(mutual bool) error { + required := []string{"ca.crt", "server.crt", "server.key"} + if mutual { + required = append(required, "client.crt", "client.key") + } + + var missing []string + + for _, file := range required { + path := filepath.Join(cm.config.CertDir, file) + + if _, err := os.Stat(path); os.IsNotExist(err) { + missing = append(missing, file) + } + } + + if len(missing) > 0 { + return fmt.Errorf("%w %s", errMissingCerts, strings.Join(missing, ", ")) + } + + return nil +} + +// Example usage: +/* +type ServerConfig struct { + Security *SecurityConfig + // ... other config fields +} + +func NewServer(config *ServerConfig) (*Server, error) { + provider, err := NewSecurityProvider(config.Security) + if err != nil { + return nil, fmt.Errorf("failed to create security provider: %w", err) + } + + creds, err := provider.GetServerCredentials(context.Background()) + if err != nil { + return nil, fmt.Errorf("failed to get server credentials: %w", err) + } + + server := grpc.NewServer(creds) + // ... 
rest of server setup + + return &Server{ + provider: provider, + server: server, + }, nil +} + +type Server struct { + provider SecurityProvider + server *grpc.Server +} + +func (s *Server) Stop() { + if s.server != nil { + s.server.GracefulStop() + } + if s.provider != nil { + _ = s.provider.Close() + } +} + +*/ diff --git a/pkg/grpc/generate_certs_test.go b/pkg/grpc/generate_certs_test.go new file mode 100644 index 0000000..dea3614 --- /dev/null +++ b/pkg/grpc/generate_certs_test.go @@ -0,0 +1,124 @@ +package grpc + +import ( + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "math/big" + "os" + "path/filepath" + "testing" + "time" + + "github.com/stretchr/testify/require" +) + +// generateTestCertificates creates a CA, server, and client certificates for testing. +func generateTestCertificates(t *testing.T, dir string) { + t.Helper() + + // Generate CA key and certificate + caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + require.NoError(t, err, "Failed to generate CA key") + + caTemplate := &x509.Certificate{ + SerialNumber: big.NewInt(1), + Subject: pkix.Name{ + Organization: []string{"Test CA"}, + }, + NotBefore: time.Now(), + NotAfter: time.Now().Add(24 * time.Hour), + IsCA: true, + KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature, + BasicConstraintsValid: true, + } + + // Self-sign the CA certificate + caCertDER, err := x509.CreateCertificate(rand.Reader, caTemplate, caTemplate, &caKey.PublicKey, caKey) + require.NoError(t, err, "Failed to create CA certificate") + + // Save CA certificate + savePEMCertificate(t, filepath.Join(dir, "ca.crt"), caCertDER) + savePEMPrivateKey(t, filepath.Join(dir, "ca.key"), caKey) + + // Generate server certificate + serverKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + require.NoError(t, err, "Failed to generate server key") + + serverTemplate := &x509.Certificate{ + SerialNumber: big.NewInt(2), + Subject: pkix.Name{ + Organization: []string{"Test Server"}, + }, + NotBefore: time.Now(), + NotAfter: time.Now().Add(24 * time.Hour), + KeyUsage: x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth}, + DNSNames: []string{"localhost"}, + } + + serverCertDER, err := x509.CreateCertificate(rand.Reader, serverTemplate, caTemplate, &serverKey.PublicKey, caKey) + require.NoError(t, err, "Failed to create server certificate") + + // Save server certificate and key + savePEMCertificate(t, filepath.Join(dir, "server.crt"), serverCertDER) + savePEMPrivateKey(t, filepath.Join(dir, "server.key"), serverKey) + + // Generate client certificate for mTLS + clientKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + require.NoError(t, err, "Failed to generate client key") + + clientTemplate := &x509.Certificate{ + SerialNumber: big.NewInt(3), + Subject: pkix.Name{ + Organization: []string{"Test Client"}, + }, + NotBefore: time.Now(), + NotAfter: time.Now().Add(24 * time.Hour), + KeyUsage: x509.KeyUsageDigitalSignature, + ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth}, + } + + clientCertDER, err := x509.CreateCertificate(rand.Reader, clientTemplate, caTemplate, &clientKey.PublicKey, caKey) + require.NoError(t, err, "Failed to create client certificate") + + // Save client certificate and key + savePEMCertificate(t, filepath.Join(dir, "client.crt"), clientCertDER) + savePEMPrivateKey(t, filepath.Join(dir, "client.key"), clientKey) + + // Verify all files exist + files := []string{"ca.crt", "ca.key", "server.crt", 
"server.key", "client.crt", "client.key"} + for _, file := range files { + path := filepath.Join(dir, file) + _, err := os.Stat(path) + require.NoError(t, err, "Failed to verify file existence: %s", file) + } +} + +func savePEMCertificate(t *testing.T, path string, derBytes []byte) { + t.Helper() + + certPEM := pem.EncodeToMemory(&pem.Block{ + Type: "CERTIFICATE", + Bytes: derBytes, + }) + err := os.WriteFile(path, certPEM, 0600) + require.NoError(t, err, "Failed to save certificate to %s", path) +} + +func savePEMPrivateKey(t *testing.T, path string, key *ecdsa.PrivateKey) { + t.Helper() + + keyBytes, err := x509.MarshalECPrivateKey(key) + require.NoError(t, err, "Failed to marshal private key") + + keyPEM := pem.EncodeToMemory(&pem.Block{ + Type: "PRIVATE KEY", + Bytes: keyBytes, + }) + err = os.WriteFile(path, keyPEM, 0600) + require.NoError(t, err, "Failed to save private key to %s", path) +} diff --git a/pkg/grpc/interfaces.go b/pkg/grpc/interfaces.go new file mode 100644 index 0000000..85f5680 --- /dev/null +++ b/pkg/grpc/interfaces.go @@ -0,0 +1,21 @@ +package grpc + +import ( + "context" + + "google.golang.org/grpc" +) + +//go:generate mockgen -destination=mock_grpc.go -package=grpc github.com/mfreeman451/serviceradar/pkg/grpc SecurityProvider + +// SecurityProvider defines the interface for gRPC security providers. +type SecurityProvider interface { + // GetClientCredentials returns credentials for client connections + GetClientCredentials(ctx context.Context) (grpc.DialOption, error) + + // GetServerCredentials returns credentials for server connections + GetServerCredentials(ctx context.Context) (grpc.ServerOption, error) + + // Close cleans up any resources + Close() error +} diff --git a/pkg/grpc/mock_grpc.go b/pkg/grpc/mock_grpc.go new file mode 100644 index 0000000..6e37bc2 --- /dev/null +++ b/pkg/grpc/mock_grpc.go @@ -0,0 +1,86 @@ +// Code generated by MockGen. DO NOT EDIT. +// Source: github.com/mfreeman451/serviceradar/pkg/grpc (interfaces: SecurityProvider) +// +// Generated by this command: +// +// mockgen -destination=mock_grpc.go -package=grpc github.com/mfreeman451/serviceradar/pkg/grpc SecurityProvider +// + +// Package grpc is a generated GoMock package. +package grpc + +import ( + context "context" + reflect "reflect" + + gomock "go.uber.org/mock/gomock" + grpc "google.golang.org/grpc" +) + +// MockSecurityProvider is a mock of SecurityProvider interface. +type MockSecurityProvider struct { + ctrl *gomock.Controller + recorder *MockSecurityProviderMockRecorder + isgomock struct{} +} + +// MockSecurityProviderMockRecorder is the mock recorder for MockSecurityProvider. +type MockSecurityProviderMockRecorder struct { + mock *MockSecurityProvider +} + +// NewMockSecurityProvider creates a new mock instance. +func NewMockSecurityProvider(ctrl *gomock.Controller) *MockSecurityProvider { + mock := &MockSecurityProvider{ctrl: ctrl} + mock.recorder = &MockSecurityProviderMockRecorder{mock} + return mock +} + +// EXPECT returns an object that allows the caller to indicate expected use. +func (m *MockSecurityProvider) EXPECT() *MockSecurityProviderMockRecorder { + return m.recorder +} + +// Close mocks base method. +func (m *MockSecurityProvider) Close() error { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "Close") + ret0, _ := ret[0].(error) + return ret0 +} + +// Close indicates an expected call of Close. 
+func (mr *MockSecurityProviderMockRecorder) Close() *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Close", reflect.TypeOf((*MockSecurityProvider)(nil).Close)) +} + +// GetClientCredentials mocks base method. +func (m *MockSecurityProvider) GetClientCredentials(ctx context.Context) (grpc.DialOption, error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "GetClientCredentials", ctx) + ret0, _ := ret[0].(grpc.DialOption) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// GetClientCredentials indicates an expected call of GetClientCredentials. +func (mr *MockSecurityProviderMockRecorder) GetClientCredentials(ctx any) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetClientCredentials", reflect.TypeOf((*MockSecurityProvider)(nil).GetClientCredentials), ctx) +} + +// GetServerCredentials mocks base method. +func (m *MockSecurityProvider) GetServerCredentials(ctx context.Context) (grpc.ServerOption, error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "GetServerCredentials", ctx) + ret0, _ := ret[0].(grpc.ServerOption) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// GetServerCredentials indicates an expected call of GetServerCredentials. +func (mr *MockSecurityProviderMockRecorder) GetServerCredentials(ctx any) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetServerCredentials", reflect.TypeOf((*MockSecurityProvider)(nil).GetServerCredentials), ctx) +} diff --git a/pkg/grpc/security.go b/pkg/grpc/security.go new file mode 100644 index 0000000..24d5138 --- /dev/null +++ b/pkg/grpc/security.go @@ -0,0 +1,272 @@ +// Package grpc pkg/grpc/security.go provides secure gRPC communication options +package grpc + +import ( + "context" + "crypto/tls" + "crypto/x509" + "fmt" + "log" + "os" + "path/filepath" + "sync" + + "github.com/spiffe/go-spiffe/v2/spiffeid" + "github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig" + "github.com/spiffe/go-spiffe/v2/workloadapi" + "google.golang.org/grpc" + "google.golang.org/grpc/credentials" + "google.golang.org/grpc/credentials/insecure" +) + +var ( + errFailedToAddCACert = fmt.Errorf("failed to add CA cert to pool") + errUnknownSecurityMode = fmt.Errorf("unknown security mode") +) + +const ( + SecurityModeNone SecurityMode = "none" + SecurityModeTLS SecurityMode = "tls" + SecurityModeSpiffe SecurityMode = "spiffe" + SecurityModeMTLS SecurityMode = "mtls" +) + +// SecurityMode defines the type of security to use. +type SecurityMode string + +// SecurityConfig holds common security configuration. +type SecurityConfig struct { + Mode SecurityMode + CertDir string + ServerName string + TrustDomain string // For SPIFFE + WorkloadSocket string // For SPIFFE +} + +// NoSecurityProvider implements SecurityProvider with no security (development only). +type NoSecurityProvider struct{} + +func (*NoSecurityProvider) GetClientCredentials(context.Context) (grpc.DialOption, error) { + return grpc.WithTransportCredentials(insecure.NewCredentials()), nil +} + +func (*NoSecurityProvider) GetServerCredentials(context.Context) (grpc.ServerOption, error) { + return grpc.Creds(insecure.NewCredentials()), nil +} + +func (*NoSecurityProvider) Close() error { + return nil +} + +// TLSProvider implements SecurityProvider with basic TLS. 
+type TLSProvider struct { + config *SecurityConfig + clientCreds credentials.TransportCredentials + serverCreds credentials.TransportCredentials +} + +func NewTLSProvider(config *SecurityConfig) (*TLSProvider, error) { + clientCreds, err := loadTLSCredentials(config, false) + if err != nil { + return nil, fmt.Errorf("failed to load client creds: %w", err) + } + + serverCreds, err := loadTLSCredentials(config, true) + if err != nil { + return nil, fmt.Errorf("failed to load server creds: %w", err) + } + + return &TLSProvider{ + config: config, + clientCreds: clientCreds, + serverCreds: serverCreds, + }, nil +} + +func (p *TLSProvider) GetClientCredentials(context.Context) (grpc.DialOption, error) { + return grpc.WithTransportCredentials(p.clientCreds), nil +} + +func (p *TLSProvider) GetServerCredentials(context.Context) (grpc.ServerOption, error) { + return grpc.Creds(p.serverCreds), nil +} + +func (*TLSProvider) Close() error { + return nil +} + +// MTLSProvider implements SecurityProvider with mutual TLS. +type MTLSProvider struct { + TLSProvider +} + +func NewMTLSProvider(config *SecurityConfig) (*MTLSProvider, error) { + tlsProvider, err := NewTLSProvider(config) + if err != nil { + return nil, err + } + + return &MTLSProvider{ + TLSProvider: *tlsProvider, + }, nil +} + +// SpiffeProvider implements SecurityProvider using SPIFFE workload API. +type SpiffeProvider struct { + config *SecurityConfig + client *workloadapi.Client + source *workloadapi.X509Source + closeOnce sync.Once +} + +func NewSpiffeProvider(config *SecurityConfig) (*SpiffeProvider, error) { + if config.WorkloadSocket == "" { + config.WorkloadSocket = "unix:/run/spire/sockets/agent.sock" + } + + // Create new workload API client + client, err := workloadapi.New( + context.Background(), + workloadapi.WithAddr(config.WorkloadSocket), + ) + if err != nil { + return nil, fmt.Errorf("failed to create workload API client: %w", err) + } + + // Create X.509 source + source, err := workloadapi.NewX509Source( + context.Background(), + workloadapi.WithClient(client), + ) + if err != nil { + return nil, fmt.Errorf("failed to create X.509 source: %w", err) + } + + return &SpiffeProvider{ + config: config, + client: client, + source: source, + }, nil +} + +func (p *SpiffeProvider) GetClientCredentials(_ context.Context) (grpc.DialOption, error) { + // Get expected server ID + serverID, err := spiffeid.FromString(p.config.TrustDomain) + if err != nil { + return nil, fmt.Errorf("invalid server SPIFFE ID: %w", err) + } + + // Create TLS config for client + tlsConfig := tlsconfig.MTLSClientConfig(p.source, p.source, tlsconfig.AuthorizeID(serverID)) + + return grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)), nil +} + +func (p *SpiffeProvider) GetServerCredentials(_ context.Context) (grpc.ServerOption, error) { + // Create TLS config for server with authorized SPIFFE ID pattern + // This authorizes any ID in our trust domain + authorizer := tlsconfig.AuthorizeAny() + + if p.config.TrustDomain != "" { + trustDomain, err := spiffeid.TrustDomainFromString(p.config.TrustDomain) + if err != nil { + return nil, fmt.Errorf("invalid trust domain: %w", err) + } + + authorizer = tlsconfig.AuthorizeMemberOf(trustDomain) + } + + tlsConfig := tlsconfig.MTLSServerConfig(p.source, p.source, authorizer) + + return grpc.Creds(credentials.NewTLS(tlsConfig)), nil +} + +func (p *SpiffeProvider) Close() error { + var err error + + p.closeOnce.Do(func() { + if p.source != nil { + err = p.source.Close() + if err != nil { + log.Printf("Failed to 
close X.509 source: %v", err) + + return + } + } + + if p.client != nil { + err = p.client.Close() + } + }) + + return err +} + +// NewSecurityProvider creates the appropriate security provider based on mode. +func NewSecurityProvider(config *SecurityConfig) (SecurityProvider, error) { + switch config.Mode { + case SecurityModeNone: + return &NoSecurityProvider{}, nil + + case SecurityModeTLS: + return NewTLSProvider(config) + + case SecurityModeMTLS: + return NewMTLSProvider(config) + + case SecurityModeSpiffe: + return NewSpiffeProvider(config) + + default: + return nil, fmt.Errorf("%w: %s", errUnknownSecurityMode, config.Mode) + } +} + +func loadTLSCredentials(config *SecurityConfig, mutual bool) (credentials.TransportCredentials, error) { + // Load certificate authority + caFile := filepath.Join(config.CertDir, "ca.crt") + + caCert, err := os.ReadFile(caFile) + if err != nil { + return nil, fmt.Errorf("failed to read CA cert: %w", err) + } + + certPool := x509.NewCertPool() + if !certPool.AppendCertsFromPEM(caCert) { + return nil, errFailedToAddCACert + } + + // Load server certificates + serverCert := filepath.Join(config.CertDir, "server.crt") + serverKey := filepath.Join(config.CertDir, "server.key") + + cert, err := tls.LoadX509KeyPair(serverCert, serverKey) + if err != nil { + return nil, fmt.Errorf("failed to load server cert/key: %w", err) + } + + // Configure TLS + tlsConfig := &tls.Config{ + Certificates: []tls.Certificate{cert}, + RootCAs: certPool, + MinVersion: tls.VersionTLS13, + } + + if mutual { + tlsConfig.ClientCAs = certPool + tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert + + // For mTLS, also load client certificates + clientCert := filepath.Join(config.CertDir, "client.crt") + clientKey := filepath.Join(config.CertDir, "client.key") + + clientPair, err := tls.LoadX509KeyPair(clientCert, clientKey) + if err != nil { + return nil, fmt.Errorf("failed to load client cert/key: %w", err) + } + + tlsConfig.Certificates = append(tlsConfig.Certificates, clientPair) + } + + return credentials.NewTLS(tlsConfig), nil +} diff --git a/pkg/grpc/security_test.go b/pkg/grpc/security_test.go new file mode 100644 index 0000000..912ccbf --- /dev/null +++ b/pkg/grpc/security_test.go @@ -0,0 +1,282 @@ +package grpc + +import ( + "context" + "os" + "path/filepath" + "strings" + "testing" + "time" + + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" + "google.golang.org/grpc" +) + +// TestNoSecurityProvider tests the NoSecurityProvider implementation. +func TestNoSecurityProvider(t *testing.T) { + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) + defer cancel() + + provider := &NoSecurityProvider{} + + t.Run("GetClientCredentials", func(t *testing.T) { + opt, err := provider.GetClientCredentials(ctx) + require.NoError(t, err) + require.NotNil(t, opt) + }) + + t.Run("GetServerCredentials", func(t *testing.T) { + opt, err := provider.GetServerCredentials(ctx) + require.NoError(t, err) + require.NotNil(t, opt) + + // Create server with a timeout to avoid hanging + s := grpc.NewServer(opt) + defer s.Stop() + assert.NotNil(t, s) + }) + + t.Run("Close", func(t *testing.T) { + err := provider.Close() + assert.NoError(t, err) + }) +} + +// TestTLSProvider tests the TLSProvider implementation. 
+func TestTLSProvider(t *testing.T) { + tmpDir := t.TempDir() + generateTestCertificates(t, tmpDir) + + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) + defer cancel() + + config := &SecurityConfig{ + Mode: SecurityModeTLS, + CertDir: tmpDir, + } + + t.Run("NewTLSProvider", func(t *testing.T) { + provider, err := NewTLSProvider(config) + require.NoError(t, err) + require.NotNil(t, provider) + assert.NotNil(t, provider.clientCreds) + assert.NotNil(t, provider.serverCreds) + + defer func() { + err := provider.Close() + require.NoError(t, err) + }() + }) + + t.Run("GetClientCredentials", func(t *testing.T) { + provider, err := NewTLSProvider(config) + require.NoError(t, err) + defer provider.Close() + + opt, err := provider.GetClientCredentials(ctx) + require.NoError(t, err) + require.NotNil(t, opt) + }) + + t.Run("GetServerCredentials", func(t *testing.T) { + provider, err := NewTLSProvider(config) + require.NoError(t, err) + defer provider.Close() + + opt, err := provider.GetServerCredentials(ctx) + require.NoError(t, err) + require.NotNil(t, opt) + + s := grpc.NewServer(opt) + defer s.Stop() + assert.NotNil(t, s) + }) + + t.Run("InvalidCertificates", func(t *testing.T) { + invalidConfig := &SecurityConfig{ + Mode: SecurityModeTLS, + CertDir: "/nonexistent", + } + + provider, err := NewTLSProvider(invalidConfig) + require.Error(t, err) + assert.Nil(t, provider) + }) +} + +func TestMTLSProvider(t *testing.T) { + tmpDir := t.TempDir() + generateTestCertificates(t, tmpDir) + + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) + defer cancel() + + config := &SecurityConfig{ + Mode: SecurityModeMTLS, + CertDir: tmpDir, + } + + t.Run("NewMTLSProvider", func(t *testing.T) { + provider, err := NewMTLSProvider(config) + require.NoError(t, err) + require.NotNil(t, provider) + assert.NotNil(t, provider.clientCreds) + assert.NotNil(t, provider.serverCreds) + + defer provider.Close() + }) + + t.Run("GetClientCredentials", func(t *testing.T) { + provider, err := NewMTLSProvider(config) + require.NoError(t, err) + defer provider.Close() + + opt, err := provider.GetClientCredentials(ctx) + require.NoError(t, err) + require.NotNil(t, opt) + }) + + t.Run("MissingClientCerts", func(t *testing.T) { + // Remove client certificates + err := os.Remove(filepath.Join(tmpDir, "client.crt")) + require.NoError(t, err) + + err = os.Remove(filepath.Join(tmpDir, "client.key")) + require.NoError(t, err) + + provider, err := NewMTLSProvider(config) + require.Error(t, err) + assert.Nil(t, provider) + }) +} + +// TestSpiffeProvider tests the SpiffeProvider implementation. 
+func TestSpiffeProvider(t *testing.T) { + // Skip if no SPIFFE workload API is available + if _, err := os.Stat("/run/spire/sockets/agent.sock"); os.IsNotExist(err) { + t.Skip("Skipping SPIFFE tests - no workload API available") + } + + _, cancel := context.WithTimeout(context.Background(), 2*time.Second) + defer cancel() + + config := &SecurityConfig{ + Mode: SecurityModeSpiffe, + TrustDomain: "example.org", + WorkloadSocket: "unix:/run/spire/sockets/agent.sock", + } + + t.Run("NewSpiffeProvider", func(t *testing.T) { + provider, err := NewSpiffeProvider(config) + if err != nil { + // If we get a connection refused, skip the test + if strings.Contains(err.Error(), "connection refused") { + t.Skip("Skipping test - SPIFFE Workload API not responding") + } + // Otherwise, fail the test with the error + t.Fatalf("Expected NewSpiffeProvider to succeed, got error: %v", err) + } + + assert.NotNil(t, provider) + + if provider != nil { + err := provider.Close() + if err != nil { + t.Fatalf("Expected Close to succeed, got error: %v", err) + return + } + } + }) + + t.Run("InvalidTrustDomain", func(t *testing.T) { + invalidConfig := &SecurityConfig{ + Mode: SecurityModeSpiffe, + TrustDomain: "invalid trust domain", + } + + provider, err := NewSpiffeProvider(invalidConfig) + require.Error(t, err) + assert.Nil(t, provider) + }) +} + +// TestNewSecurityProvider tests the factory function for creating security providers. +// TestNewSecurityProvider tests the factory function for creating security providers. +func TestNewSecurityProvider(t *testing.T) { + tmpDir := t.TempDir() + generateTestCertificates(t, tmpDir) + + tests := []struct { + name string + config *SecurityConfig + expectError bool + }{ + { + name: "NoSecurity", + config: &SecurityConfig{ + Mode: SecurityModeNone, + }, + expectError: false, + }, + { + name: "TLS", + config: &SecurityConfig{ + Mode: SecurityModeTLS, + CertDir: tmpDir, + }, + expectError: false, + }, + { + name: "MTLS", + config: &SecurityConfig{ + Mode: SecurityModeMTLS, + CertDir: tmpDir, + }, + expectError: false, // Should now pass with generated client certs + }, + /* + { + name: "SPIFFE", + config: &SecurityConfig{ + Mode: SecurityModeSpiffe, + TrustDomain: "example.org", + }, + expectError: true, // Will fail without Workload API + }, + */ + { + name: "Invalid Mode", + config: &SecurityConfig{ + Mode: "invalid", + }, + expectError: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) + defer cancel() + + provider, err := NewSecurityProvider(tt.config) + if tt.expectError { + require.Error(t, err) + assert.Nil(t, provider) + + return + } + + require.NoError(t, err) + assert.NotNil(t, provider) + + // Test basic provider operations if not expecting error + opt, err := provider.GetClientCredentials(ctx) + require.NoError(t, err) + assert.NotNil(t, opt) + + err = provider.Close() + assert.NoError(t, err) + }) + } +} diff --git a/pkg/scan/combined_scanner.go b/pkg/scan/combined_scanner.go index bb0e4de..8e0991c 100644 --- a/pkg/scan/combined_scanner.go +++ b/pkg/scan/combined_scanner.go @@ -69,9 +69,11 @@ func (s *CombinedScanner) Scan(ctx context.Context, targets []models.Target) (<- // Pass total hosts count through result metadata for i := range targets { - targets[i].Metadata = map[string]interface{}{ - "total_hosts": totalHosts, + if targets[i].Metadata == nil { + targets[i].Metadata = make(map[string]interface{}) } + + targets[i].Metadata["total_hosts"] = totalHosts 
} // Handle single scanner cases @@ -113,79 +115,64 @@ func (s *CombinedScanner) handleSingleScannerCase(ctx context.Context, targets s // handleMixedScanners manages scanning with both TCP and ICMP scanners. func (s *CombinedScanner) handleMixedScanners(ctx context.Context, targets scanTargets) (<-chan models.Result, error) { - results := make(chan models.Result) + // Buffer for all potential results + results := make(chan models.Result, len(targets.tcp)+len(targets.icmp)) var wg sync.WaitGroup - errChan := make(chan error, errorChannelSize) + errChan := make(chan error, errorChannelSize) // One potential error from each scanner // Start TCP scanner if needed - if err := s.startTCPScanner(ctx, targets.tcp, &wg, results); err != nil { - return nil, err + if len(targets.tcp) > 0 { + wg.Add(1) + + go func() { + defer wg.Done() + + tcpResults, err := s.tcpScanner.Scan(ctx, targets.tcp) + if err != nil { + errChan <- fmt.Errorf("TCP scan error: %w", err) + return + } + + s.forwardResults(ctx, tcpResults, results) + }() } // Start ICMP scanner if available and needed if s.icmpScanner != nil && len(targets.icmp) > 0 { - if err := s.startICMPScanner(ctx, targets.icmp, &wg, results); err != nil { - return nil, err - } - } else if len(targets.icmp) > 0 { - log.Printf("Skipping ICMP scan of %d targets - ICMP scanner not available", len(targets.icmp)) - } + wg.Add(1) - // Close results when both scanners are done - go func() { - wg.Wait() - close(results) - close(errChan) - }() - - return results, nil -} + go func() { + defer wg.Done() -// startTCPScanner initializes and starts the TCP scanner if TCP targets exist. -func (s *CombinedScanner) startTCPScanner( - ctx context.Context, targets []models.Target, wg *sync.WaitGroup, results chan models.Result) error { - if len(targets) == 0 { - return nil - } + icmpResults, err := s.icmpScanner.Scan(ctx, targets.icmp) + if err != nil { + errChan <- fmt.Errorf("ICMP scan error: %w", err) + return + } - tcpResults, err := s.tcpScanner.Scan(ctx, targets) - if err != nil { - return fmt.Errorf("TCP scan error: %w", err) + s.forwardResults(ctx, icmpResults, results) + }() } - wg.Add(1) - + // Wait for completion in a separate goroutine go func() { - defer wg.Done() - - s.forwardResults(ctx, tcpResults, results) + wg.Wait() + close(results) + close(errChan) }() - return nil -} - -// startICMPScanner initializes and starts the ICMP scanner if ICMP targets exist. 
-func (s *CombinedScanner) startICMPScanner( - ctx context.Context, targets []models.Target, wg *sync.WaitGroup, results chan models.Result) error { - if len(targets) == 0 { - return nil - } - - icmpResults, err := s.icmpScanner.Scan(ctx, targets) - if err != nil { - return fmt.Errorf("ICMP scan error: %w", err) + // Check for any immediate errors + select { + case err := <-errChan: + if err != nil { + return nil, err + } + default: } - wg.Add(1) - - go func() { - defer wg.Done() - s.forwardResults(ctx, icmpResults, results) - }() - - return nil + return results, nil } func (*CombinedScanner) separateTargets(targets []models.Target) scanTargets { @@ -211,12 +198,12 @@ func (*CombinedScanner) separateTargets(targets []models.Target) scanTargets { func (s *CombinedScanner) forwardResults(ctx context.Context, in <-chan models.Result, out chan<- models.Result) { for { select { - case r, ok := <-in: + case result, ok := <-in: if !ok { return } select { - case out <- r: + case out <- result: case <-ctx.Done(): return case <-s.done: diff --git a/pkg/scan/icmp_scanner.go b/pkg/scan/icmp_scanner.go index 61c7806..f8cde1b 100644 --- a/pkg/scan/icmp_scanner.go +++ b/pkg/scan/icmp_scanner.go @@ -1,4 +1,3 @@ -// Package scan pkg/scan/icmp_scanner.go package scan import ( @@ -10,22 +9,20 @@ import ( "net" "os" "sync" + "sync/atomic" "syscall" "time" "github.com/mfreeman451/serviceradar/pkg/models" "golang.org/x/net/icmp" - "golang.org/x/net/ipv4" ) const ( - scannerShutdownTimeout = 5 * time.Second - maxPacketSize = 1500 - templateSize = 8 - packetReadDeadline = 100 * time.Millisecond - listenForReplyIdleTimeout = 2 * time.Second - setReadDeadlineTimeout = 100 * time.Millisecond - idleTimeoutMultiplier = 2 + maxPacketSize = 1500 + templateSize = 8 + packetReadDeadline = 100 * time.Millisecond + listenerStartDelay = 10 * time.Millisecond + responseWaitDelay = 100 * time.Millisecond ) var ( @@ -40,42 +37,36 @@ type ICMPScanner struct { done chan struct{} rawSocket int template []byte - responses map[string]*pingResponse - mu sync.RWMutex - listenerWg sync.WaitGroup + responses sync.Map } type pingResponse struct { - received int - totalTime time.Duration - lastSeen time.Time - sendTime time.Time - dropped int - sent int + received atomic.Int64 + totalTime atomic.Int64 + lastSeen atomic.Value + sendTime atomic.Value + dropped atomic.Int64 + sent atomic.Int64 } func NewICMPScanner(timeout time.Duration, concurrency, count int) (*ICMPScanner, error) { - // Validate parameters before proceeding if timeout <= 0 || concurrency <= 0 || count <= 0 { return nil, errInvalidParameters } + fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_RAW, syscall.IPPROTO_ICMP) + if err != nil { + return nil, fmt.Errorf("failed to create socket: %w", err) + } + s := &ICMPScanner{ timeout: timeout, concurrency: concurrency, count: count, done: make(chan struct{}), - responses: make(map[string]*pingResponse), - } - - // Create raw socket - fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_RAW, syscall.IPPROTO_ICMP) - if err != nil { - return nil, fmt.Errorf("%w: %w", errInvalidSocket, err) + rawSocket: fd, } - s.rawSocket = fd - s.buildTemplate() return s, nil @@ -86,74 +77,177 @@ func (s *ICMPScanner) Scan(ctx context.Context, targets []models.Target) (<-chan return nil, errInvalidSocket } - results := make(chan models.Result) + results := make(chan models.Result, len(targets)) rateLimit := time.Second / time.Duration(s.concurrency) - // Start listener go s.listenForReplies(ctx) + time.Sleep(listenerStartDelay) go 
func() { defer close(results) + s.processTargets(ctx, targets, results, rateLimit) + }() - for _, target := range targets { + return results, nil +} + +func (s *ICMPScanner) processTargets(ctx context.Context, targets []models.Target, results chan<- models.Result, rateLimit time.Duration) { + batchSize := s.concurrency + for i := 0; i < len(targets); i += batchSize { + end := i + batchSize + if end > len(targets) { + end = len(targets) + } + + batch := targets[i:end] + + var wg sync.WaitGroup + + for _, target := range batch { if target.Mode != models.ModeICMP { continue } - // Initialize response for this target - s.mu.Lock() - if _, exists := s.responses[target.Host]; !exists { - s.responses[target.Host] = &pingResponse{} + wg.Add(1) + + go func(target models.Target) { + defer wg.Done() + s.sendPingsToTarget(ctx, target, rateLimit) + }(target) + } + + wg.Wait() + time.Sleep(responseWaitDelay) + + for _, target := range batch { + if target.Mode != models.ModeICMP { + continue } - s.mu.Unlock() - - // Send pings and track sent count - for i := 0; i < s.count; i++ { - select { - case <-ctx.Done(): - return - case <-s.done: - return - default: - s.mu.Lock() - s.responses[target.Host].sent++ - s.mu.Unlock() - - if err := s.sendPing(net.ParseIP(target.Host)); err != nil { - log.Printf("Error sending ping to %s: %v", target.Host, err) - s.mu.Lock() - s.responses[target.Host].dropped++ - s.mu.Unlock() - } - } - time.Sleep(rateLimit) + + s.sendResultsForTarget(ctx, results, target) + } + } +} + +func (s *ICMPScanner) sendPingsToTarget(ctx context.Context, target models.Target, rateLimit time.Duration) { + resp := &pingResponse{} + resp.lastSeen.Store(time.Time{}) + resp.sendTime.Store(time.Now()) + s.responses.Store(target.Host, resp) + + for i := 0; i < s.count; i++ { + select { + case <-ctx.Done(): + return + case <-s.done: + return + default: + resp.sent.Add(1) + + if err := s.sendPing(net.ParseIP(target.Host)); err != nil { + log.Printf("Error sending ping to %s: %v", target.Host, err) + resp.dropped.Add(1) } - // Calculate results - s.mu.RLock() + time.Sleep(rateLimit) + } + } +} + +func (s *ICMPScanner) sendResultsForTarget(ctx context.Context, results chan<- models.Result, target models.Target) { + value, ok := s.responses.Load(target.Host) + if !ok { + return + } + + resp := value.(*pingResponse) + received := resp.received.Load() + sent := resp.sent.Load() + totalTime := resp.totalTime.Load() + lastSeen := resp.lastSeen.Load().(time.Time) + + avgResponseTime := time.Duration(0) + if received > 0 { + avgResponseTime = time.Duration(totalTime) / time.Duration(received) + } + + packetLoss := float64(0) + if sent > 0 { + packetLoss = float64(sent-received) / float64(sent) * 100 + } + + select { + case results <- models.Result{ + Target: target, + Available: received > 0, + RespTime: avgResponseTime, + PacketLoss: packetLoss, + LastSeen: lastSeen, + FirstSeen: time.Now(), + }: + case <-ctx.Done(): + return + case <-s.done: + return + } + + s.responses.Delete(target.Host) +} - resp := s.responses[target.Host] +func (s *ICMPScanner) listenForReplies(ctx context.Context) { + conn, err := icmp.ListenPacket("ip4:icmp", "0.0.0.0") + if err != nil { + log.Printf("Failed to start ICMP listener: %v", err) + return + } + defer func(conn *icmp.PacketConn) { + err := conn.Close() + if err != nil { + log.Printf("Failed to close ICMP listener: %v", err) + } + }(conn) + + buffer := make([]byte, maxPacketSize) + + for { + select { + case <-ctx.Done(): + return + case <-s.done: + return + default: + if err := 
conn.SetReadDeadline(time.Now().Add(packetReadDeadline)); err != nil { + continue + } - var avgResponseTime time.Duration + _, peer, err := conn.ReadFrom(buffer) + if err != nil { + if !os.IsTimeout(err) { + log.Printf("Error reading ICMP packet: %v", err) + } - if resp.received > 0 { - avgResponseTime = resp.totalTime / time.Duration(resp.received) + continue } - packetLoss := float64(resp.sent-resp.received) / float64(resp.sent) * 100 - s.mu.RUnlock() + ipStr := peer.String() - results <- models.Result{ - Target: target, - Available: resp.received > 0, - RespTime: avgResponseTime, - PacketLoss: packetLoss, - LastSeen: resp.lastSeen, + value, ok := s.responses.Load(ipStr) + if !ok { + continue } - } - }() - return results, nil + resp := value.(*pingResponse) + + resp.received.Add(1) + + now := time.Now() + + sendTime := resp.sendTime.Load().(time.Time) + + resp.totalTime.Add(now.Sub(sendTime).Nanoseconds()) + resp.lastSeen.Store(now) + } + } } const ( @@ -178,9 +272,6 @@ func (s *ICMPScanner) buildTemplate() { binary.BigEndian.PutUint16(s.template[templateChecksum:], s.calculateChecksum(s.template)) } -// calculateChecksum calculates the ICMP checksum for a byte slice. -// The checksum is the one's complement of the sum of the 16-bit integers in the data. -// If the data has an odd length, the last byte is padded with zero. func (*ICMPScanner) calculateChecksum(data []byte) uint16 { var ( sum uint32 @@ -210,9 +301,7 @@ func (*ICMPScanner) calculateChecksum(data []byte) uint16 { } func (s *ICMPScanner) sendPing(ip net.IP) error { - const ( - addrSize = 4 - ) + const addrSize = 4 var addr [addrSize]byte @@ -222,123 +311,17 @@ func (s *ICMPScanner) sendPing(ip net.IP) error { Addr: addr, } - s.mu.Lock() - if resp, exists := s.responses[ip.String()]; exists { - resp.sendTime = time.Now() + if value, ok := s.responses.Load(ip.String()); ok { + resp := value.(*pingResponse) + resp.sendTime.Store(time.Now()) } - s.mu.Unlock() return syscall.Sendto(s.rawSocket, s.template, 0, &dest) } -func (s *ICMPScanner) listenForReplies(ctx context.Context) { - conn, err := icmp.ListenPacket("ip4:icmp", "0.0.0.0") - if err != nil { - log.Printf("Failed to start ICMP listener: %v", err) - return - } - - s.listenerWg.Add(1) - - defer func() { - s.closeConn(conn) - s.listenerWg.Done() - }() - - packet := make([]byte, maxPacketSize) - - // Create a timeout timer for idle shutdown - idleTimeout := time.NewTimer(s.timeout * idleTimeoutMultiplier) - defer idleTimeout.Stop() - - for { - select { - case <-ctx.Done(): - return - case <-s.done: - return - case <-idleTimeout.C: - // If we've been idle too long, shut down - log.Printf("ICMP listener idle timeout, shutting down") - return - default: - // Set read deadline to ensure we don't block forever - if err := conn.SetReadDeadline(time.Now().Add(setReadDeadlineTimeout)); err != nil { - continue - } - - n, peer, err := conn.ReadFrom(packet) - if err != nil { - // Handle timeout by continuing - var netErr net.Error - if errors.As(err, &netErr) && netErr.Timeout() { - continue - } - - log.Printf("Error reading ICMP packet: %v", err) - - continue - } - - // Reset idle timeout since we got a packet - idleTimeout.Reset(s.timeout * idleTimeoutMultiplier) - - s.processICMPMessage(packet[:n], peer) - } - } -} - -func (*ICMPScanner) closeConn(conn *icmp.PacketConn) { - if err := conn.Close(); err != nil { - log.Printf("Failed to close ICMP listener: %v", err) - } -} - -func (s *ICMPScanner) processICMPMessage(data []byte, peer net.Addr) { - msg, err := icmp.ParseMessage(1, 
data) - if err != nil { - return - } - - if msg.Type == ipv4.ICMPTypeEchoReply { - s.updateResponse(peer.String()) - } -} - -func (s *ICMPScanner) updateResponse(ipStr string) { - s.mu.Lock() - defer s.mu.Unlock() - - if resp, exists := s.responses[ipStr]; exists { - resp.received++ - - resp.lastSeen = time.Now() - - if !resp.sendTime.IsZero() { - resp.totalTime += time.Since(resp.sendTime) - } - } -} - func (s *ICMPScanner) Stop() error { - // Signal shutdown close(s.done) - // Wait for listener with timeout - done := make(chan struct{}) - go func() { - s.listenerWg.Wait() - close(done) - }() - - // Wait with timeout - select { - case <-done: - // Normal shutdown - case <-time.After(scannerShutdownTimeout): - log.Printf("Warning: ICMP listener shutdown timed out") - } - if s.rawSocket != 0 { if err := syscall.Close(s.rawSocket); err != nil { return fmt.Errorf("failed to close raw socket: %w", err) diff --git a/pkg/scan/tcp_scanner.go b/pkg/scan/tcp_scanner.go index ca2da31..3a92287 100644 --- a/pkg/scan/tcp_scanner.go +++ b/pkg/scan/tcp_scanner.go @@ -15,7 +15,7 @@ type TCPScanner struct { timeout time.Duration concurrency int done chan struct{} - scan func(context.Context, []models.Target) (<-chan models.Result, error) + // scan func(context.Context, []models.Target) (<-chan models.Result, error) } func NewTCPScanner(timeout time.Duration, concurrency int) *TCPScanner { @@ -32,64 +32,82 @@ func (s *TCPScanner) Stop() error { } func (s *TCPScanner) Scan(ctx context.Context, targets []models.Target) (<-chan models.Result, error) { - if s.scan != nil { - return s.scan(ctx, targets) - } + results := make(chan models.Result, len(targets)) + targetChan := make(chan models.Target, s.concurrency) + + scanCtx, cancel := context.WithCancel(ctx) + + // Launch the scan operation + s.launchScan(scanCtx, cancel, targets, targetChan, results) - results := make(chan models.Result) - targetChan := make(chan models.Target) + return results, nil +} +func (s *TCPScanner) launchScan( + ctx context.Context, + cancel context.CancelFunc, + targets []models.Target, + targetChan chan models.Target, + results chan models.Result) { var wg sync.WaitGroup + // Start worker pool + s.startWorkerPool(ctx, &wg, targetChan, results) + + // Start target feeder + go s.feedTargets(ctx, targets, targetChan) + + // Start completion handler + go s.handleCompletion(&wg, cancel, results) +} + +func (s *TCPScanner) startWorkerPool(ctx context.Context, wg *sync.WaitGroup, targetChan chan models.Target, results chan models.Result) { for i := 0; i < s.concurrency; i++ { wg.Add(1) - go s.worker(ctx, &wg, targetChan, results) + go s.runWorker(ctx, wg, targetChan, results) } +} - // Feed targets to workers - go func() { - defer close(targetChan) +func (s *TCPScanner) runWorker(ctx context.Context, wg *sync.WaitGroup, targetChan chan models.Target, results chan models.Result) { + defer wg.Done() - for _, target := range targets { - select { - case <-ctx.Done(): - return - case <-s.done: + for { + select { + case target, ok := <-targetChan: + if !ok { return - case targetChan <- target: } - } - }() - - // Close results when all workers are done - go func() { - wg.Wait() - close(results) - }() - return results, nil + s.scanTarget(ctx, target, results) + case <-ctx.Done(): + return + case <-s.done: + return + } + } } -func (s *TCPScanner) worker(ctx context.Context, wg *sync.WaitGroup, targets <-chan models.Target, results chan<- models.Result) { - defer wg.Done() +func (s *TCPScanner) feedTargets(ctx context.Context, targets 
[]models.Target, targetChan chan models.Target) { + defer close(targetChan) - for { + for _, target := range targets { select { + case targetChan <- target: case <-ctx.Done(): return case <-s.done: return - case target, ok := <-targets: - if !ok { - return - } - - s.scanTarget(ctx, target, results) } } } +func (*TCPScanner) handleCompletion(wg *sync.WaitGroup, cancel context.CancelFunc, results chan models.Result) { + wg.Wait() + cancel() // Cleanup context + close(results) +} + func (s *TCPScanner) scanTarget(ctx context.Context, target models.Target, results chan<- models.Result) { start := time.Now() result := models.Result{ @@ -108,8 +126,8 @@ func (s *TCPScanner) scanTarget(ctx context.Context, target models.Target, resul addr := net.JoinHostPort(target.Host, strconv.Itoa(target.Port)) conn, err := d.DialContext(connCtx, "tcp", addr) - result.RespTime = time.Since(start) + if err != nil { result.Error = err result.Available = false @@ -117,14 +135,14 @@ func (s *TCPScanner) scanTarget(ctx context.Context, target models.Target, resul result.Available = true if err := conn.Close(); err != nil { - log.Print("Error closing connection: ", err) - return + log.Printf("Error closing connection: %v", err) } } + // Send result with proper context handling select { + case results <- result: case <-ctx.Done(): case <-s.done: - case results <- result: } } diff --git a/pkg/sweeper/base_processor.go b/pkg/sweeper/base_processor.go index 2aefed0..613947f 100644 --- a/pkg/sweeper/base_processor.go +++ b/pkg/sweeper/base_processor.go @@ -2,16 +2,17 @@ package sweeper import ( "context" + "log" "sync" "time" "github.com/mfreeman451/serviceradar/pkg/models" ) -type ProcessorLocker interface { - RLock() - RUnlock() -} +const ( + startingBufferSize = 16 + portCountDivisor = 4 +) type BaseProcessor struct { mu sync.RWMutex @@ -20,16 +21,127 @@ type BaseProcessor struct { lastSweepTime time.Time firstSeenTimes map[string]time.Time totalHosts int + hostResultPool *sync.Pool + portResultPool *sync.Pool + portCount int // Number of ports being scanned + config *models.Config } -func NewBaseProcessor() *BaseProcessor { - return &BaseProcessor{ +func NewBaseProcessor(config *models.Config) *BaseProcessor { + portCount := len(config.Ports) + if portCount == 0 { + portCount = 100 + } + + // Create pools before initializing the processor + hostPool := &sync.Pool{ + New: func() interface{} { + return &models.HostResult{ + PortResults: make([]*models.PortResult, 0, startingBufferSize), + } + }, + } + + portPool := &sync.Pool{ + New: func() interface{} { + return &models.PortResult{} + }, + } + + p := &BaseProcessor{ hostMap: make(map[string]*models.HostResult), portCounts: make(map[int]int), firstSeenTimes: make(map[string]time.Time), + portCount: portCount, + config: config, + hostResultPool: hostPool, + portResultPool: portPool, + } + + return p +} + +func (p *BaseProcessor) UpdateConfig(config *models.Config) { + // First update the configuration + newPortCount := len(config.Ports) + if newPortCount == 0 { + newPortCount = 100 + } + + log.Printf("Updating port count from %d to %d", p.portCount, newPortCount) + + // Create new pool outside the lock + newPool := &sync.Pool{ + New: func() interface{} { + return &models.HostResult{ + // Start with 25% capacity, will grow if needed + PortResults: make([]*models.PortResult, 0, newPortCount/portCountDivisor), + } + }, + } + + // Take lock only for the update + p.mu.Lock() + if newPortCount != p.portCount { + p.portCount = newPortCount + p.config = config + 
p.hostResultPool = newPool + } + p.mu.Unlock() + + // Clean up after releasing the lock + if newPortCount != p.portCount { + log.Printf("Cleaning up existing results") + p.cleanup() } } +func (p *BaseProcessor) cleanup() { + p.mu.Lock() + defer p.mu.Unlock() + + log.Printf("Starting cleanup") + + // Get all hosts to clean up + hostsToClean := make([]*models.HostResult, 0, len(p.hostMap)) + for _, host := range p.hostMap { + hostsToClean = append(hostsToClean, host) + } + + // Reset maps first + p.hostMap = make(map[string]*models.HostResult) + p.portCounts = make(map[int]int) + p.firstSeenTimes = make(map[string]time.Time) + p.totalHosts = 0 + p.lastSweepTime = time.Time{} + + // Clean up hosts outside the lock + p.mu.Unlock() + + for _, host := range hostsToClean { + // Clean up port results + for _, pr := range host.PortResults { + // Reset and return port result to pool + pr.Port = 0 + pr.Available = false + pr.RespTime = 0 + pr.Service = "" + p.portResultPool.Put(pr) + } + + // Reset and return host result to pool + host.Host = "" + host.PortResults = host.PortResults[:0] + host.ICMPStatus = nil + host.ResponseTime = 0 + p.hostResultPool.Put(host) + } + + p.mu.Lock() // Re-acquire lock before returning (due to defer) + + log.Printf("Cleanup complete") +} + func (p *BaseProcessor) Process(result *models.Result) error { p.mu.Lock() defer p.mu.Unlock() @@ -42,7 +154,12 @@ func (p *BaseProcessor) Process(result *models.Result) error { switch result.Target.Mode { case models.ModeICMP: + // Log before ICMP processing + log.Printf("Processing ICMP result for %s (available: %v, response time: %v)", + result.Target.Host, result.Available, result.RespTime) + p.processICMPResult(host, result) + case models.ModeTCP: p.processTCPResult(host, result) } @@ -50,26 +167,13 @@ func (p *BaseProcessor) Process(result *models.Result) error { return nil } -func (p *BaseProcessor) updateLastSweepTime() { - now := time.Now() - if now.After(p.lastSweepTime) { - p.lastSweepTime = now - } -} - -func (p *BaseProcessor) updateTotalHosts(result *models.Result) { - if result.Target.Metadata != nil { - if totalHosts, ok := result.Target.Metadata["total_hosts"].(int); ok { - p.totalHosts = totalHosts - } - } -} - func (*BaseProcessor) processICMPResult(host *models.HostResult, result *models.Result) { + // Always initialize ICMPStatus if host.ICMPStatus == nil { host.ICMPStatus = &models.ICMPStatus{} } + // Update availability and response time if result.Available { host.Available = true host.ICMPStatus.Available = true @@ -85,59 +189,10 @@ func (*BaseProcessor) processICMPResult(host *models.HostResult, result *models. 
if result.RespTime > 0 { host.ResponseTime = result.RespTime } -} -func (p *BaseProcessor) processTCPResult(host *models.HostResult, result *models.Result) { - if result.Available { - host.Available = true - p.updatePortStatus(host, result) - } -} - -func (p *BaseProcessor) updatePortStatus(host *models.HostResult, result *models.Result) { - var found bool - - for _, port := range host.PortResults { - if port.Port == result.Target.Port { - port.Available = true - port.RespTime = result.RespTime - found = true - - break - } - } - - if !found { - host.PortResults = append(host.PortResults, &models.PortResult{ - Port: result.Target.Port, - Available: true, - RespTime: result.RespTime, - }) - p.portCounts[result.Target.Port]++ - } -} - -func (p *BaseProcessor) getOrCreateHost(hostAddr string, now time.Time) *models.HostResult { - host, exists := p.hostMap[hostAddr] - if !exists { - firstSeen := now - if seen, ok := p.firstSeenTimes[hostAddr]; ok { - firstSeen = seen - } else { - p.firstSeenTimes[hostAddr] = firstSeen - } - - host = &models.HostResult{ - Host: hostAddr, - FirstSeen: firstSeen, - LastSeen: now, - Available: false, - PortResults: make([]*models.PortResult, 0), - } - p.hostMap[hostAddr] = host - } - - return host + // Log after processing + log.Printf("Updated ICMP status for %s: available=%v, roundtrip=%v", + host.Host, host.ICMPStatus.Available, host.ICMPStatus.RoundTrip) } func (p *BaseProcessor) GetSummary(ctx context.Context) (*models.SweepSummary, error) { @@ -156,6 +211,7 @@ func (p *BaseProcessor) GetSummary(ctx context.Context) (*models.SweepSummary, e } availableHosts := 0 + icmpHosts := 0 // Track ICMP-responding hosts ports := make([]models.PortCount, 0, len(p.portCounts)) hosts := make([]models.HostResult, 0, len(p.hostMap)) @@ -170,10 +226,17 @@ func (p *BaseProcessor) GetSummary(ctx context.Context) (*models.SweepSummary, e if host.Available { availableHosts++ } + // Count ICMP-responding hosts + if host.ICMPStatus != nil && host.ICMPStatus.Available { + icmpHosts++ + } hosts = append(hosts, *host) } + log.Printf("Summary stats - Total hosts: %d, Available: %d, ICMP responding: %d", + len(p.hostMap), availableHosts, icmpHosts) + actualTotalHosts := len(p.hostMap) if actualTotalHosts == 0 { actualTotalHosts = p.totalHosts @@ -187,3 +250,77 @@ func (p *BaseProcessor) GetSummary(ctx context.Context) (*models.SweepSummary, e Hosts: hosts, }, nil } + +func (p *BaseProcessor) updateLastSweepTime() { + now := time.Now() + if now.After(p.lastSweepTime) { + p.lastSweepTime = now + } +} + +func (p *BaseProcessor) updateTotalHosts(result *models.Result) { + if result.Target.Metadata != nil { + if totalHosts, ok := result.Target.Metadata["total_hosts"].(int); ok { + p.totalHosts = totalHosts + } + } +} + +func (p *BaseProcessor) processTCPResult(host *models.HostResult, result *models.Result) { + if result.Available { + host.Available = true + p.updatePortStatus(host, result) + } +} + +func (p *BaseProcessor) updatePortStatus(host *models.HostResult, result *models.Result) { + var found bool + + for _, port := range host.PortResults { + if port.Port == result.Target.Port { + port.Available = true + port.RespTime = result.RespTime + found = true + + break + } + } + + if !found { + portResult := p.portResultPool.Get().(*models.PortResult) + portResult.Port = result.Target.Port + portResult.Available = true + portResult.RespTime = result.RespTime + + host.PortResults = append(host.PortResults, portResult) + p.portCounts[result.Target.Port]++ + } +} + +func (p *BaseProcessor) 
getOrCreateHost(hostAddr string, now time.Time) *models.HostResult { + host, exists := p.hostMap[hostAddr] + if !exists { + host = p.hostResultPool.Get().(*models.HostResult) + + // Reset/initialize the host result + host.Host = hostAddr + host.Available = false + host.PortResults = host.PortResults[:0] // Clear slice but keep capacity + host.ICMPStatus = nil + host.ResponseTime = 0 + + firstSeen := now + if seen, ok := p.firstSeenTimes[hostAddr]; ok { + firstSeen = seen + } else { + p.firstSeenTimes[hostAddr] = firstSeen + } + + host.FirstSeen = firstSeen + host.LastSeen = now + + p.hostMap[hostAddr] = host + } + + return host +} diff --git a/pkg/sweeper/base_processor_test.go b/pkg/sweeper/base_processor_test.go new file mode 100644 index 0000000..0ed4f75 --- /dev/null +++ b/pkg/sweeper/base_processor_test.go @@ -0,0 +1,407 @@ +package sweeper + +import ( + "fmt" + "log" + "math/big" + "runtime" + "sync" + "testing" + "time" + + "github.com/mfreeman451/serviceradar/pkg/models" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestBaseProcessor_MemoryManagement(t *testing.T) { + config := createLargePortConfig() + + t.Run("Memory Usage with Many Hosts Few Ports", func(t *testing.T) { + testMemoryUsageWithManyHostsFewPorts(t, config) + }) + + t.Run("Memory Usage with Few Hosts Many Ports", func(t *testing.T) { + testMemoryUsageWithFewHostsManyPorts(t, config) + }) + + t.Run("Memory Release After Cleanup", func(t *testing.T) { + testMemoryReleaseAfterCleanup(t, config) + }) +} + +func createLargePortConfig() *models.Config { + config := &models.Config{ + Ports: make([]int, 2300), + } + for i := range config.Ports { + config.Ports[i] = i + 1 + } + + return config +} + +func testMemoryUsageWithManyHostsFewPorts(t *testing.T, config *models.Config) { + t.Helper() + + processor := NewBaseProcessor(config) + defer processor.cleanup() + + runtime.GC() // Force garbage collection before test + + var memBefore runtime.MemStats + + runtime.ReadMemStats(&memBefore) + + // Process 1000 hosts with only 1-2 ports each + for i := 0; i < 1000; i++ { + host := createHost(i%255, i%2+1) + err := processor.Process(host) + require.NoError(t, err) + } + + runtime.GC() // Force garbage collection before measurement + + var memAfter runtime.MemStats + + runtime.ReadMemStats(&memAfter) + + memGrowth := memAfter.Alloc - memBefore.Alloc + t.Logf("Memory growth: %d bytes", memGrowth) + assert.Less(t, memGrowth, uint64(10*1024*1024), "Memory growth should be less than 10MB") +} + +func testMemoryUsageWithFewHostsManyPorts(t *testing.T, config *models.Config) { + t.Helper() + + processor := NewBaseProcessor(config) + defer processor.cleanup() + + runtime.GC() // Force garbage collection before test + + var memBefore runtime.MemStats + + runtime.ReadMemStats(&memBefore) + + // Process 10 hosts with many ports each + for i := 0; i < 10; i++ { + for port := 1; port <= 1000; port++ { + host := createHost(i, port) + err := processor.Process(host) + require.NoError(t, err) + } + + // Force intermediate GC to prevent memory spikes + if i%2 == 0 { + runtime.GC() + } + } + + runtime.GC() // Force garbage collection before measurement + + var memAfter runtime.MemStats + + runtime.ReadMemStats(&memAfter) + + memGrowth := memAfter.HeapAlloc - memBefore.HeapAlloc + t.Logf("Memory growth with many ports: %d bytes", memGrowth) + + const maxMemoryGrowth = 75 * 1024 * 1024 // 75MB + + assert.Less(t, memGrowth, uint64(maxMemoryGrowth), "Memory growth should be less than 75MB") +} + +func 
testMemoryReleaseAfterCleanup(t *testing.T, config *models.Config) { + t.Helper() + + processor := NewBaseProcessor(config) + + runtime.GC() // Force GC before test + + var memBefore runtime.MemStats + + runtime.ReadMemStats(&memBefore) + + // Process a moderate amount of data + for i := 0; i < 100; i++ { + for port := 1; port <= 100; port++ { + host := createHost(i, port) + err := processor.Process(host) + require.NoError(t, err) + } + } + + processor.cleanup() // Call cleanup + runtime.GC() // Force GC after cleanup + + var memAfter runtime.MemStats + + runtime.ReadMemStats(&memAfter) + + memDiff := new(big.Int).Sub( + new(big.Int).SetUint64(memAfter.HeapAlloc), + new(big.Int).SetUint64(memBefore.HeapAlloc), + ) + + t.Logf("Memory difference after cleanup: %s bytes", memDiff.String()) + assert.Negative(t, memDiff.Cmp(big.NewInt(1*1024*1024)), "Memory should be mostly released after cleanup") +} + +func createHost(hostIndex, port int) *models.Result { + return &models.Result{ + Target: models.Target{ + Host: fmt.Sprintf("192.168.1.%d", hostIndex), + Port: port, + Mode: models.ModeTCP, + }, + Available: true, + RespTime: time.Millisecond * 10, + } +} + +func TestBaseProcessor_ConcurrentAccess(t *testing.T) { + config := &models.Config{ + Ports: make([]int, 2300), + } + for i := range config.Ports { + config.Ports[i] = i + 1 + } + + processor := NewBaseProcessor(config) + defer processor.cleanup() + + t.Run("Concurrent Processing", func(t *testing.T) { + var wg sync.WaitGroup + + numGoroutines := 100 + resultsPerRoutine := 100 + + // Create a buffered channel to collect any errors + errorChan := make(chan error, numGoroutines*resultsPerRoutine) + + // Launch multiple goroutines to process results concurrently + for i := 0; i < numGoroutines; i++ { + wg.Add(1) + + go func(routineID int) { + defer wg.Done() + + for j := 0; j < resultsPerRoutine; j++ { + result := &models.Result{ + Target: models.Target{ + Host: fmt.Sprintf("192.168.1.%d", routineID), + Port: j%2300 + 1, + Mode: models.ModeTCP, + }, + Available: true, + RespTime: time.Millisecond * 10, + } + if err := processor.Process(result); err != nil { + errorChan <- fmt.Errorf("routine %d, iteration %d: %w", routineID, j, err) + } + } + }(i) + } + + // Wait for all goroutines to complete + wg.Wait() + close(errorChan) + + // Check for any errors + var errors []error + for err := range errorChan { + errors = append(errors, err) + } + + assert.Empty(t, errors, "No errors should occur during concurrent processing") + + // Verify results + assert.Len(t, processor.hostMap, numGoroutines, "Should have expected number of hosts") + + for _, host := range processor.hostMap { + assert.NotNil(t, host) + // Each host should have some port results + assert.NotEmpty(t, host.PortResults) + } + }) +} + +func TestBaseProcessor_ResourceCleanup(t *testing.T) { + config := &models.Config{ + Ports: make([]int, 2300), + } + for i := range config.Ports { + config.Ports[i] = i + 1 + } + + t.Run("Cleanup After Processing", func(t *testing.T) { + processor := NewBaseProcessor(config) + + // Process some results + for i := 0; i < 100; i++ { + result := &models.Result{ + Target: models.Target{ + Host: fmt.Sprintf("192.168.1.%d", i), + Port: i%2300 + 1, + Mode: models.ModeTCP, + }, + Available: true, + RespTime: time.Millisecond * 10, + } + err := processor.Process(result) + require.NoError(t, err) + } + + // Verify we have data + assert.NotEmpty(t, processor.hostMap) + assert.NotEmpty(t, processor.portCounts) + + // Cleanup + processor.cleanup() + + // Verify 
everything is cleaned up + assert.Empty(t, processor.hostMap) + assert.Empty(t, processor.portCounts) + assert.Empty(t, processor.firstSeenTimes) + assert.True(t, processor.lastSweepTime.IsZero()) + }) + + t.Run("Pool Reuse", func(t *testing.T) { + processor := NewBaseProcessor(config) + defer processor.cleanup() + + // Process results and track allocated hosts + allocatedHosts := make(map[*models.HostResult]struct{}) + + // First batch + for i := 0; i < 10; i++ { + result := &models.Result{ + Target: models.Target{ + Host: fmt.Sprintf("192.168.1.%d", i), + Port: 80, + Mode: models.ModeTCP, + }, + Available: true, + } + err := processor.Process(result) + require.NoError(t, err) + + // Track the allocated host + allocatedHosts[processor.hostMap[result.Target.Host]] = struct{}{} + } + + // Cleanup and process again + processor.cleanup() + + // Second batch + reusedCount := 0 + + for i := 0; i < 10; i++ { + result := &models.Result{ + Target: models.Target{ + Host: fmt.Sprintf("192.168.1.%d", i), + Port: 80, + Mode: models.ModeTCP, + }, + Available: true, + } + err := processor.Process(result) + require.NoError(t, err) + + // Check if the host was reused + if _, exists := allocatedHosts[processor.hostMap[result.Target.Host]]; exists { + reusedCount++ + } + } + + // We should see some reuse of objects from the pool + assert.Positive(t, reusedCount, "Should reuse some objects from the pool") + }) +} + +func TestBaseProcessor_ConfigurationUpdates(t *testing.T) { + initialConfig := &models.Config{ + Ports: make([]int, 100), // Start with fewer ports + } + for i := range initialConfig.Ports { + initialConfig.Ports[i] = i + 1 + } + + t.Run("Handle Config Updates", func(t *testing.T) { + processor := NewBaseProcessor(initialConfig) + defer processor.cleanup() + + // Test initial configuration + assert.Equal(t, 100, processor.portCount, "Initial port count should match config") + + // Process some results with initial config + for i := 0; i < 10; i++ { + result := &models.Result{ + Target: models.Target{ + Host: fmt.Sprintf("192.168.1.%d", i), + Port: i%100 + 1, + Mode: models.ModeTCP, + }, + Available: true, + } + err := processor.Process(result) + require.NoError(t, err, "Processing with initial config should succeed") + } + + // Verify initial state + processor.mu.RLock() + initialHosts := len(processor.hostMap) + + var initialCapacity int + + for _, host := range processor.hostMap { + initialCapacity = cap(host.PortResults) + break + } + processor.mu.RUnlock() + + assert.Equal(t, 10, initialHosts, "Should have 10 hosts initially") + assert.LessOrEqual(t, initialCapacity, 100, "Initial capacity should not exceed port count") + + log.Printf("Initial capacity: %d", initialCapacity) + + // Update to larger port count + newConfig := &models.Config{ + Ports: make([]int, 2300), + } + for i := range newConfig.Ports { + newConfig.Ports[i] = i + 1 + } + + processor.UpdateConfig(newConfig) + + // Verify config update + assert.Equal(t, 2300, processor.portCount, "Port count should be updated") + + // Process more results with new config + for i := 0; i < 10; i++ { + result := &models.Result{ + Target: models.Target{ + Host: fmt.Sprintf("192.168.2.%d", i), // Different subnet to avoid conflicts + Port: i%2300 + 1, + Mode: models.ModeTCP, + }, + Available: true, + } + err := processor.Process(result) + require.NoError(t, err, "Processing with new config should succeed") + } + + // Verify final state + processor.mu.RLock() + defer processor.mu.RUnlock() + + assert.Len(t, processor.hostMap, 20, "Should have 20 
hosts total") + + // Check port result capacities + for _, host := range processor.hostMap { + assert.LessOrEqual(t, cap(host.PortResults), 2300, + "Host port results capacity should not exceed new config port count") + } + }) +} diff --git a/pkg/sweeper/sweeper.go b/pkg/sweeper/sweeper.go index 38c68c6..14464e0 100644 --- a/pkg/sweeper/sweeper.go +++ b/pkg/sweeper/sweeper.go @@ -16,8 +16,19 @@ import ( "github.com/mfreeman451/serviceradar/pkg/scan" ) +const ( + cidr32 = 32 + networkAndBroadcast = 2 + maxInt = int(^uint(0) >> 1) // maxInt is the maximum value of int on the current platform + bitsBeforeOverflow = 63 +) + var ( errInvalidSweepMode = errors.New("invalid sweep mode") + errTargetCapacity = errors.New("target capacity overflowed") + errNetworkCapacity = errors.New("error calculating network capacity") + errInvalidCIDRMask = errors.New("invalid CIDR mask") + errCIDRMaskTooLarge = errors.New("CIDR mask is too large to calculate network size") ) // NetworkSweeper implements the Sweeper interface. @@ -28,18 +39,27 @@ type NetworkSweeper struct { processor ResultProcessor mu sync.RWMutex done chan struct{} + lastSweep time.Time // Track last sweep time } func (s *NetworkSweeper) Start(ctx context.Context) error { - ticker := time.NewTicker(s.config.Interval) - defer ticker.Stop() - // Do initial sweep if err := s.runSweep(ctx); err != nil { log.Printf("Initial sweep failed: %v", err) return err } + // Create ticker after initial sweep + ticker := time.NewTicker(s.config.Interval) + defer ticker.Stop() + + // Update last sweep time + s.mu.Lock() + s.lastSweep = time.Now() + s.mu.Unlock() + + log.Printf("Starting sweep cycle with interval: %v", s.config.Interval) + for { select { case <-ctx.Done(): @@ -54,6 +74,84 @@ func (s *NetworkSweeper) Start(ctx context.Context) error { } } +func (s *NetworkSweeper) runSweep(ctx context.Context) error { + s.mu.RLock() + lastSweepTime := s.lastSweep + interval := s.config.Interval + s.mu.RUnlock() + + // Check if enough time has passed + if time.Since(lastSweepTime) < interval { + log.Printf("Skipping sweep, not enough time elapsed since last sweep") + return nil + } + + sweepStart := time.Now() + log.Printf("Starting network sweep at %v", sweepStart.Format(time.RFC3339)) + + // Generate targets + targets, err := s.generateTargets() + if err != nil { + return fmt.Errorf("failed to generate targets: %w", err) + } + + // Start the scan + results, err := s.scanner.Scan(ctx, targets) + if err != nil { + return fmt.Errorf("scan failed: %w", err) + } + + // Track stats without locks + icmpSuccess := 0 + tcpSuccess := 0 + totalResults := 0 + uniqueHosts := make(map[string]struct{}) + + // Process results + for result := range results { + if err := s.processor.Process(&result); err != nil { + log.Printf("Failed to process result: %v", err) + continue + } + + if err := s.store.SaveResult(ctx, &result); err != nil { + log.Printf("Failed to save result: %v", err) + continue + } + + // Update stats + totalResults++ + uniqueHosts[result.Target.Host] = struct{}{} + + if result.Available { + switch result.Target.Mode { + case models.ModeICMP: + icmpSuccess++ + + log.Printf("Host %s responded to ICMP ping (%.2fms)", + result.Target.Host, float64(result.RespTime)/float64(time.Millisecond)) + case models.ModeTCP: + tcpSuccess++ + + log.Printf("Host %s has port %d open (%.2fms)", + result.Target.Host, result.Target.Port, + float64(result.RespTime)/float64(time.Millisecond)) + } + } + } + + // Update last sweep time + s.mu.Lock() + s.lastSweep = sweepStart + 
s.mu.Unlock() + + duration := time.Since(sweepStart) + log.Printf("Sweep completed in %.2f seconds: %d total results, %d successful (%d ICMP, %d TCP), %d unique hosts", + duration.Seconds(), totalResults, icmpSuccess+tcpSuccess, icmpSuccess, tcpSuccess, len(uniqueHosts)) + + return nil +} + func (s *NetworkSweeper) Stop() error { close(s.done) return s.scanner.Stop() @@ -105,90 +203,138 @@ func (m *SweepMode) MarshalJSON() ([]byte, error) { } func (s *NetworkSweeper) generateTargets() ([]models.Target, error) { - var allTargets []models.Target + // Calculate total targets and unique IPs + targetCapacity, uniqueIPs, err := s.calculateTargetCapacity() + if err != nil { + return nil, fmt.Errorf("failed to calculate target capacity: %w", err) + } + // Pre-allocate slice with calculated capacity + allTargets := make([]models.Target, 0, targetCapacity) + + // Generate targets for each network for _, network := range s.config.Networks { - // First generate all IP addresses - ips, err := generateIPsFromCIDR(network) + targets, err := s.generateTargetsForNetwork(network, uniqueIPs) if err != nil { - return nil, fmt.Errorf("failed to generate IPs for %s: %w", network, err) - } - - // For each IP, create ICMP target if enabled - if containsMode(s.config.SweepModes, models.ModeICMP) { - for _, ip := range ips { - allTargets = append(allTargets, models.Target{ - Host: ip.String(), - Mode: models.ModeICMP, - }) - } + return nil, fmt.Errorf("failed to generate targets for network %s: %w", network, err) } - // For each IP, create TCP targets for each port if enabled - if containsMode(s.config.SweepModes, models.ModeTCP) { - for _, ip := range ips { - for _, port := range s.config.Ports { - allTargets = append(allTargets, models.Target{ - Host: ip.String(), - Port: port, - Mode: models.ModeTCP, - }) - } - } - } + allTargets = append(allTargets, targets...) } - log.Printf("Generated %d targets (%d IPs, %d ports, modes: %v)", + log.Printf("Generated %d targets (%d unique IPs, %d ports, modes: %v)", len(allTargets), - len(allTargets)/(len(s.config.Ports)+1), + len(uniqueIPs), len(s.config.Ports), s.config.SweepModes) return allTargets, nil } -func (s *NetworkSweeper) runSweep(ctx context.Context) error { - // Generate targets - targets, err := s.generateTargets() - if err != nil { - return fmt.Errorf("failed to generate targets: %w", err) +func (s *NetworkSweeper) calculateTargetCapacity() (targetCap int, uniqueIPs map[string]struct{}, err error) { + u := make(map[string]struct{}) + targetCapacity := 0 + + for _, network := range s.config.Networks { + capacity, err := calculateNetworkCapacity(network, s.config.SweepModes, len(s.config.Ports)) + if err != nil { + return 0, nil, fmt.Errorf("%w for network %s: %w", errNetworkCapacity, network, err) + } + + // Check for overflow before adding + if targetCapacity > maxInt-capacity { + return 0, nil, errTargetCapacity + } + + targetCapacity += capacity } - // Start the scan - results, err := s.scanner.Scan(ctx, targets) + return targetCapacity, u, nil +} + +// calculateNetworkCapacity calculates the target capacity for a single network. 
+func calculateNetworkCapacity(network string, sweepModes []models.SweepMode, numPorts int) (int, error) { + _, ipnet, err := net.ParseCIDR(network) if err != nil { - return fmt.Errorf("scan failed: %w", err) + return 0, fmt.Errorf("%w %s: %w", errInvalidCIDRMask, network, err) } - // Process results as they come in - for result := range results { - // Process the result first - if err := s.processor.Process(&result); err != nil { - log.Printf("Failed to process result: %v", err) + ones, bits := ipnet.Mask.Size() + if bits < ones { + return 0, fmt.Errorf("%w %s: bits (%d) < ones (%d)", errInvalidCIDRMask, network, bits, ones) + } + + shift := bits - ones + + // Ensure the shift is within a safe range + if shift > bitsBeforeOverflow { // 63 because 1 << 64 would overflow on 64-bit systems + return 0, fmt.Errorf("%w %s", errCIDRMaskTooLarge, network) + } + + // Calculate network size, considering /32 + networkSize := 1 << shift + if ones < cidr32 { + networkSize -= networkAndBroadcast + } + + // Calculate target capacity for the network based on enabled modes + capacity := 0 + if containsMode(sweepModes, models.ModeICMP) { + capacity += networkSize + } + + if containsMode(sweepModes, models.ModeTCP) { + // Check for overflow before multiplying + if numPorts > 0 && networkSize > maxInt/numPorts { + return 0, fmt.Errorf("%w", errTargetCapacity) } - // Store the result - if err := s.store.SaveResult(ctx, &result); err != nil { - log.Printf("Failed to save result: %v", err) + capacity += networkSize * numPorts + } + + return capacity, nil +} + +// generateTargetsForNetwork generates targets for a specific network. +func (s *NetworkSweeper) generateTargetsForNetwork(network string, uniqueIPs map[string]struct{}) ([]models.Target, error) { + ips, err := generateIPsFromCIDR(network) + if err != nil { + return nil, fmt.Errorf("failed to generate IPs for %s: %w", network, err) + } + + var targets []models.Target + + for _, ip := range ips { + ipStr := ip.String() + uniqueIPs[ipStr] = struct{}{} + + // Add ICMP target if enabled + if containsMode(s.config.SweepModes, models.ModeICMP) { + targets = append(targets, models.Target{ + Host: ipStr, + Mode: models.ModeICMP, + Metadata: map[string]interface{}{ + "network": network, + }, + }) } - // Log based on scan type - switch result.Target.Mode { - case models.ModeICMP: - if result.Available { - log.Printf("Host %s responded to ICMP ping (%.2fms)", - result.Target.Host, float64(result.RespTime)/float64(time.Millisecond)) - } - case models.ModeTCP: - if result.Available { - log.Printf("Host %s has port %d open (%.2fms)", - result.Target.Host, result.Target.Port, - float64(result.RespTime)/float64(time.Millisecond)) + // Add TCP targets if enabled + if containsMode(s.config.SweepModes, models.ModeTCP) { + for _, port := range s.config.Ports { + targets = append(targets, models.Target{ + Host: ipStr, + Port: port, + Mode: models.ModeTCP, + Metadata: map[string]interface{}{ + "network": network, + }, + }) } } } - return nil + return targets, nil } func containsMode(modes []models.SweepMode, mode models.SweepMode) bool { diff --git a/setup-deb-agent.sh b/setup-deb-agent.sh index bb7ca6d..a11343d 100755 --- a/setup-deb-agent.sh +++ b/setup-deb-agent.sh @@ -2,7 +2,7 @@ # setup-deb-agent.sh set -e # Exit on any error -VERSION=${VERSION:-1.0.10} +VERSION=${VERSION:-1.0.11} echo "Building serviceradar-agent version ${VERSION}" echo "Setting up package structure..." 
diff --git a/setup-deb-cloud.sh b/setup-deb-cloud.sh index 4f8c3ae..609d23a 100755 --- a/setup-deb-cloud.sh +++ b/setup-deb-cloud.sh @@ -4,7 +4,7 @@ set -e # Exit on any error echo "Setting up package structure..." -VERSION=${VERSION:-1.0.10} +VERSION=${VERSION:-1.0.11} # Create package directory structure PKG_ROOT="serviceradar-cloud_${VERSION}" diff --git a/setup-deb-dusk-checker.sh b/setup-deb-dusk-checker.sh index df8d894..bbdc27a 100755 --- a/setup-deb-dusk-checker.sh +++ b/setup-deb-dusk-checker.sh @@ -4,7 +4,7 @@ set -e # Exit on any error echo "Setting up package structure..." -VERSION=${VERSION:-1.0.10} +VERSION=${VERSION:-1.0.11} # Create package directory structure PKG_ROOT="serviceradar-dusk-checker_${VERSION}" diff --git a/setup-deb-poller.sh b/setup-deb-poller.sh index c46f4b0..d21b5cc 100755 --- a/setup-deb-poller.sh +++ b/setup-deb-poller.sh @@ -4,7 +4,7 @@ set -e # Exit on any error echo "Setting up package structure..." -VERSION=${VERSION:-1.0.10} +VERSION=${VERSION:-1.0.11} # Create package directory structure PKG_ROOT="serviceradar-poller_${VERSION}" diff --git a/web/package-lock.json b/web/package-lock.json index a4cc976..537bd0e 100644 --- a/web/package-lock.json +++ b/web/package-lock.json @@ -1,12 +1,12 @@ { "name": "serviceradar-web", - "version": "1.0.10", + "version": "1.0.11", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "serviceradar-web", - "version": "1.0.10", + "version": "1.0.11", "dependencies": { "@radix-ui/react-navigation-menu": "^1.1.4", "@radix-ui/react-slot": "^1.0.2", diff --git a/web/package.json b/web/package.json index b7693b5..11ba383 100644 --- a/web/package.json +++ b/web/package.json @@ -1,6 +1,6 @@ { "name": "serviceradar-web", - "version": "1.0.10", + "version": "1.0.11", "private": true, "type": "module", "dependencies": {
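---

For readers skimming the sweeper changes above: `generateTargets` now pre-allocates its slice using `calculateNetworkCapacity`, and the arithmetic is easy to sanity-check by hand. The standalone sketch below mirrors that math for a hypothetical `/24` swept in both ICMP and TCP modes. The constants `cidr32` and `networkAndBroadcast` and the 2,300-port figure come from the diff; the `sketchCapacity` function, the `main` wrapper, and the omission of the overflow guards are illustrative simplifications, not part of the change.

```go
// Standalone sketch (not part of the patch): reproduces the capacity math
// used by the patched sweeper so the pre-allocation can be checked by hand.
package main

import (
	"fmt"
	"net"
)

const (
	cidr32              = 32 // a /32 has no separate network/broadcast address
	networkAndBroadcast = 2  // addresses excluded from networks smaller than /32
)

// sketchCapacity estimates how many sweep targets one CIDR contributes:
// one ICMP target per usable IP, plus one TCP target per usable IP per port.
// Overflow guards from the real calculateNetworkCapacity are omitted here.
func sketchCapacity(cidr string, icmp, tcp bool, numPorts int) (int, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return 0, err
	}

	ones, bits := ipnet.Mask.Size()
	networkSize := 1 << (bits - ones)
	if ones < cidr32 {
		networkSize -= networkAndBroadcast // drop network and broadcast addresses
	}

	capacity := 0
	if icmp {
		capacity += networkSize
	}
	if tcp {
		capacity += networkSize * numPorts
	}

	return capacity, nil
}

func main() {
	// A /24 has 254 usable IPs; with ICMP plus 2,300 TCP ports per host
	// (the port count used in the new tests) that is 254 + 254*2300 = 584,454
	// targets, which is the capacity generateTargets would reserve up front.
	n, err := sketchCapacity("192.168.1.0/24", true, true, 2300)
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 584454
}
```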