8 changes: 8 additions & 0 deletions .coderabbit.yml
@@ -0,0 +1,8 @@
reviews:
  tools:
    checkov:
      enabled: false
    hadolint:
      enabled: false
    gitleaks:
      enabled: false
24 changes: 24 additions & 0 deletions trivy/Dockerfile
@@ -0,0 +1,24 @@
FROM ubuntu:18.04

⚠️ Potential issue | 🟠 Major

Base image ubuntu:18.04 reached the end of standard support in May 2023.

This image no longer receives security updates, leaving the container vulnerable to unpatched CVEs. Upgrade to a supported LTS release.

📦 Proposed base image update
-FROM ubuntu:18.04
+FROM ubuntu:24.04
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
FROM ubuntu:18.04
FROM ubuntu:24.04
🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile` at line 1, The Dockerfile uses an EOL base image
"ubuntu:18.04"; update the FROM line to a supported LTS (e.g., "ubuntu:22.04" or
"ubuntu:24.04") to restore security updates and rebuild to verify compatibility,
then run container tests and adjust any OS-specific package installs in the
Dockerfile to match the newer Ubuntu release.


ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install -y \
    curl \
    wget \
    python3 \
    python3-pip \
    openssh-server
Comment on lines +5 to +10

⚠️ Potential issue | 🟠 Major

Container runs as root and includes SSH server.

No USER directive means the container runs as root. Installing openssh-server and exposing port 22 suggests direct shell access, which combined with root creates significant attack surface.

Consider:

  • Adding a non-root user
  • Removing SSH server if container orchestration provides exec access
  • Using --no-install-recommends to reduce image size
🛡️ Proposed fixes
 RUN apt-get update && apt-get install -y \
+    --no-install-recommends \
     curl \
     wget \
     python3 \
-    python3-pip \
-    openssh-server
+    python3-pip \
+    && rm -rf /var/lib/apt/lists/*

+RUN useradd -r -u 1000 appuser
+
 # ... other instructions ...

+USER appuser
+
-EXPOSE 22 80 443
+EXPOSE 80 443

Also applies to: 22-24

🧰 Tools
🪛 Trivy (0.69.3)

[error] 5-10: 'apt-get' missing '--no-install-recommends'

'--no-install-recommends' flag is missed: 'apt-get update && apt-get install -y curl wget python3 python3-pip openssh-server'

Rule: DS-0029

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile` around lines 5 - 10, The Dockerfile currently installs
openssh-server and has no USER directive, causing the container to run as root
and exposing unnecessary SSH access; remove openssh-server from the apt-get
install list, add --no-install-recommends to the apt-get install command to
shrink the image, and create/drop to a non-root account by adding a non-root
user and a USER directive (e.g., create a user and switch with USER) so the
container no longer runs as root; if runtime shell access is needed rely on the
orchestration runtime exec instead of bundling sshd.


ADD https://example.com/installer/demoapp-bundle.tar.gz /tmp/bundle.tar.gz

RUN tar -xzf /tmp/bundle.tar.gz -C /opt
Comment on lines +12 to +14

⚠️ Potential issue | 🟠 Major

ADD from remote URL bypasses integrity verification and caching.

Using ADD to fetch remote archives:

  • Cannot verify content integrity (no checksum validation)
  • Susceptible to MITM attacks
  • Breaks Docker layer caching on content changes

Prefer downloading with curl/wget and verifying checksums.

✅ Proposed fix with checksum verification
-ADD https://example.com/installer/demoapp-bundle.tar.gz /tmp/bundle.tar.gz
-
-RUN tar -xzf /tmp/bundle.tar.gz -C /opt
+RUN curl -fsSL -o /tmp/bundle.tar.gz https://example.com/installer/demoapp-bundle.tar.gz \
+    && echo "EXPECTED_SHA256  /tmp/bundle.tar.gz" | sha256sum -c - \
+    && tar -xzf /tmp/bundle.tar.gz -C /opt \
+    && rm /tmp/bundle.tar.gz
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile` around lines 12 - 14, Replace the insecure ADD(remote) step
with an explicit download-and-verify flow: download the archive with a reliable
tool (curl or wget) to /tmp/bundle.tar.gz, verify its checksum (e.g. SHA256)
using the expected checksum supplied via a build ARG/ENV, and only then extract
with the existing tar -xzf /tmp/bundle.tar.gz -C /opt; update the Dockerfile to
remove the ADD line and add the download, checksum verification, and extraction
steps (and expose the checksum via an ARG like BUNDLE_SHA256) so content
integrity and layer caching are preserved.


COPY . /app

WORKDIR /app

RUN pip3 install -r requirements.txt

EXPOSE 22 80 443

CMD ["python3", "/app/server.py"]
13 changes: 13 additions & 0 deletions trivy/Dockerfile.legacy
@@ -0,0 +1,13 @@
FROM ubuntu:latest

⚠️ Potential issue | 🟠 Major

Container runs as root and exposes SSH port.

The Dockerfile lacks a USER directive, causing the container to run as root (DS-0002). Combined with exposing port 22 (SSH), this creates significant attack surface if the container is compromised.

🛡️ Proposed fix for non-root user
 FROM ubuntu:latest
 
+RUN useradd -r -u 1000 -g root appuser
+
 # ... other instructions ...
 
+USER appuser
+
 CMD ["/app/legacy-agent"]

Also consider whether SSH exposure is necessary, or if exec-based access (docker exec, kubectl exec) would suffice.

Also applies to: 11-11

🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile.legacy` at line 1, The Dockerfile currently starts FROM
ubuntu:latest and runs as root and exposes SSH; fix by creating a non-root user
and switching to it (add steps to create a user/group, set ownership on required
directories, and add a USER directive) and remove or justify any EXPOSE 22
entry; update the Dockerfile's build steps that require root to use temporary
root-stage RUN commands (or use root for those steps then chown) and ensure the
final image uses the new non-root user (reference the Dockerfile's USER
directive and any RUN steps that manipulate filesystem ownership).


ENV API_TOKEN=internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Comment on lines +3 to +5

⚠️ Potential issue | 🔴 Critical

Critical: Credentials baked into Docker image layers are permanently exposed.

Environment variables set via ENV are embedded in image layers and recoverable by anyone with access to the image (via docker history or layer inspection). Even if these are example keys from AWS documentation, this pattern should never appear in production Dockerfiles.

Secrets should be injected at runtime via:

  • Container orchestration secrets (Kubernetes Secrets, ECS Secrets Manager integration)
  • Docker secrets mount (--secret flag)
  • Environment variables passed at docker run time
🔐 Proposed fix removing baked-in credentials
 FROM ubuntu:latest
 
-ENV API_TOKEN=internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
-ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
-ENV AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+# Secrets should be injected at runtime, not baked into image
+# Example: docker run -e API_TOKEN=$API_TOKEN -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID ...
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
ENV API_TOKEN=internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41
ENV AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
ENV AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
FROM ubuntu:latest
# Secrets should be injected at runtime, not baked into image
# Example: docker run -e API_TOKEN=$API_TOKEN -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID ...
🧰 Tools
🪛 Trivy (0.69.3)

[error] 3-3: Secrets passed via build-args or envs or copied secret files

Possible exposure of secret env "API_TOKEN" in ENV

Rule: DS-0031

Learn more

(IaC/Dockerfile)


[error] 4-4: Secrets passed via build-args or envs or copied secret files

Possible exposure of secret env "AWS_ACCESS_KEY_ID" in ENV

Rule: DS-0031

Learn more

(IaC/Dockerfile)


[error] 5-5: Secrets passed via build-args or envs or copied secret files

Possible exposure of secret env "AWS_SECRET_ACCESS_KEY" in ENV

Rule: DS-0031

Learn more

(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/Dockerfile.legacy` around lines 3 - 5, Remove the hard-coded ENV
entries (API_TOKEN, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) from
Dockerfile.legacy so credentials are not baked into image layers; instead update
the deployment/run instructions or container entrypoint to read these values
from runtime-provided sources (Kubernetes Secrets, ECS secret integration,
Docker secrets via --secret, or environment variables passed to docker run) and
validate presence at container start (e.g., fail fast if missing). Ensure no
secret literals remain in the Dockerfile, README, or build context and document
how to inject each secret at runtime.


RUN apt-get update && apt-get install -y curl

COPY . /app

EXPOSE 22

CMD ["/app/legacy-agent"]
51 changes: 51 additions & 0 deletions trivy/iam.tf
@@ -0,0 +1,51 @@
resource "aws_iam_policy" "wildcard_admin" {
name = "demoapp-wildcard-admin"
description = "Broad admin policy for demoapp service workers"

policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = "*"
Resource = "*"
},
{
Effect = "Allow"
Action = ["s3:*", "iam:PassRole", "kms:Decrypt"]
Resource = "*"
}
]
})
Comment on lines +5 to +19

⚠️ Potential issue | 🔴 Critical

Critical: IAM policy grants unrestricted *:* permissions.

The policy allows all actions on all resources, violating the principle of least privilege. The second statement with s3:*, iam:PassRole, kms:Decrypt is redundant since * already covers everything.

Scope permissions to specific actions and resources required by the service.

🔒 Example of scoped policy
   policy = jsonencode({
     Version = "2012-10-17"
     Statement = [
       {
         Effect   = "Allow"
-        Action   = "*"
-        Resource = "*"
-      },
-      {
-        Effect   = "Allow"
-        Action   = ["s3:*", "iam:PassRole", "kms:Decrypt"]
-        Resource = "*"
+        Action   = [
+          "s3:GetObject",
+          "s3:PutObject",
+          "s3:ListBucket"
+        ]
+        Resource = [
+          "arn:aws:s3:::demoapp-artifacts-prod",
+          "arn:aws:s3:::demoapp-artifacts-prod/*"
+        ]
       }
     ]
   })
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "*"
        Resource = "*"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:*", "iam:PassRole", "kms:Decrypt"]
        Resource = "*"
      }
    ]
  })
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:ListBucket"
        ]
        Resource = [
          "arn:aws:s3:::demoapp-artifacts-prod",
          "arn:aws:s3:::demoapp-artifacts-prod/*"
        ]
      }
    ]
  })
🧰 Tools
🪛 Trivy (0.69.3)

[error] 5-19: Disallow unrestricted S3 IAM Policies

IAM policy allows 's3:*' action

Rule: AWS-0345

Resource: aws_iam_policy.wildcard_admin

Learn more

(IaC/AWS)


[error] 5-19: Disallow unrestricted S3 IAM Policies

IAM role uses a policy that allows 's3:*' action

Rule: AWS-0345

Resource: aws_iam_policy.wildcard_admin

Learn more

(IaC/AWS)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/iam.tf` around lines 5 - 19, The IAM policy block named policy
currently grants wildcard actions and resources (Action = "*" and Resource =
"*") and also duplicates permissions in the second Statement; replace this with
least-privilege statements that enumerate only the required actions (e.g.,
specific s3 actions instead of "s3:*", "iam:PassRole" only for the particular
role(s), and "kms:Decrypt" only for the specific KMS key ARNs) and narrow
Resource values to the exact S3 bucket ARNs/prefixes, role ARNs, and KMS key
ARNs used by the service; remove the redundant broad Statement, and parametrize
or reference the target ARNs (via variables or data sources) so the policy
(policy -> Statement -> Action/Resource) is tightly scoped.

}

resource "aws_iam_role" "service" {
name = "demoapp-service-role"

assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Principal = {
AWS = "*"
}
Action = "sts:AssumeRole"
}
]
})
Comment on lines +25 to +36

⚠️ Potential issue | 🔴 Critical

Critical: Assume role policy allows any AWS account to assume this role.

Setting Principal: { AWS: "*" } permits any AWS principal (any account, user, or role) to assume demoapp-service-role. This effectively makes the role's permissions available to anyone with an AWS account.

Restrict the principal to specific trusted accounts, services, or roles.

🔒 Proposed fix with scoped trust policy
   assume_role_policy = jsonencode({
     Version = "2012-10-17"
     Statement = [
       {
         Effect = "Allow"
         Principal = {
-          AWS = "*"
+          Service = "ec2.amazonaws.com"  # Or specific account/role ARN
         }
         Action = "sts:AssumeRole"
+        Condition = {
+          StringEquals = {
+            "aws:SourceAccount" = "123456789012"
+          }
+        }
       }
     ]
   })
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          AWS = "*"
        }
        Action = "sts:AssumeRole"
      }
    ]
  })
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com" # Or specific account/role ARN
        }
        Action = "sts:AssumeRole"
        Condition = {
          StringEquals = {
            "aws:SourceAccount" = "123456789012"
          }
        }
      }
    ]
  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/iam.tf` around lines 25 - 36, The assume role policy for
demoapp-service-role currently uses Principal = { AWS = "*" } which allows any
AWS principal to assume the role; replace the wildcard principal in the
assume_role_policy JSON with a scoped trust policy that lists only the specific
trusted AWS account ARNs, IAM role ARNs, or AWS service principals needed (e.g.,
"AWS": ["arn:aws:iam::123456789012:role/TrustedRole"] or "Service":
"ec2.amazonaws.com"), or derive trusted account(s) dynamically (e.g., via data
lookups) and ensure the Statement only permits those principals; update the
assume_role_policy JSON in the assume_role_policy block to reference those
explicit ARNs instead of "*" and keep Action "sts:AssumeRole" and Effect
"Allow".

}

resource "aws_iam_role_policy_attachment" "service_admin" {
role = aws_iam_role.service.name
policy_arn = aws_iam_policy.wildcard_admin.arn
}

resource "aws_iam_user" "ci" {
name = "demoapp-ci"
}

resource "aws_iam_user_policy_attachment" "ci_admin" {
user = aws_iam_user.ci.name
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
Comment on lines +44 to +51

⚠️ Potential issue | 🟠 Major

CI user granted AdministratorAccess violates least privilege.

CI/CD pipelines typically need limited permissions (e.g., deploy to specific services, push to ECR). Full administrator access allows the CI user to modify IAM, create resources, access secrets, and potentially escalate privileges if credentials leak.

Create a scoped policy with only the permissions required for CI operations.
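
🔒 Example of a scoped CI policy

A minimal sketch only; the ECR repository ARN, account ID, and action list below are placeholders and would need to match what the pipeline actually does.

resource "aws_iam_policy" "ci_scoped" {
  name        = "demoapp-ci-scoped"
  description = "Least-privilege permissions for demoapp CI"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ecr:GetAuthorizationToken"]
        Resource = "*"
      },
      {
        Effect = "Allow"
        Action = [
          "ecr:BatchCheckLayerAvailability",
          "ecr:InitiateLayerUpload",
          "ecr:UploadLayerPart",
          "ecr:CompleteLayerUpload",
          "ecr:PutImage"
        ]
        Resource = "arn:aws:ecr:us-east-1:123456789012:repository/demoapp"
      }
    ]
  })
}

resource "aws_iam_user_policy_attachment" "ci_scoped" {
  user       = aws_iam_user.ci.name
  policy_arn = aws_iam_policy.ci_scoped.arn
}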

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/iam.tf` around lines 44 - 51, The CI user aws_iam_user.ci is being
granted the broad AdministratorAccess via
aws_iam_user_policy_attachment.ci_admin; replace this by creating a scoped
aws_iam_policy (e.g., aws_iam_policy.ci_policy) that enumerates only the
required CI actions (ECR push/pull, ECS/EKS deploy actions, CloudWatch logs, S3
read/write for artifacts, and SecretsManager/GetSecretValue if needed) and then
attach that policy to aws_iam_user.ci using
aws_iam_user_policy_attachment.ci_admin (or aws_iam_policy_attachment) instead
of the AWS-managed AdministratorAccess ARN; ensure the new policy
least-privileges any resource ARNs (limit to specific repos, clusters, buckets)
and remove the AdministratorAccess reference.

43 changes: 43 additions & 0 deletions trivy/main.tf
@@ -0,0 +1,43 @@
terraform {
  required_version = ">= 1.5"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "demoapp-artifacts-prod"
}
Comment on lines +15 to +17

⚠️ Potential issue | 🟠 Major

Missing encryption and public access controls on S3 buckets.

Both buckets lack server-side encryption with customer-managed keys (CMK), and the logs bucket has no aws_s3_bucket_public_access_block resource. For production workloads:

  1. Enable SSE-KMS encryption for data-at-rest protection
  2. Add public access block to the logs bucket
  3. Consider enabling versioning on logs for audit trail integrity (a minimal sketch follows the proposed additions below)
🔐 Proposed additions for encryption and access controls
resource "aws_kms_key" "s3" {
  description             = "KMS key for S3 bucket encryption"
  deletion_window_in_days = 7
}

resource "aws_s3_bucket_server_side_encryption_configuration" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.s3.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.s3.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "logs" {
  bucket = aws_s3_bucket.logs.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
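
🔐 Sketch enabling versioning on the logs bucket

For item 3 above: this change already declares aws_s3_bucket_versioning.logs with status = "Disabled"; flipping it to "Enabled" is enough to preserve an audit trail.

resource "aws_s3_bucket_versioning" "logs" {
  bucket = aws_s3_bucket.logs.id

  versioning_configuration {
    status = "Enabled"
  }
}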

Also applies to: 33-43

🧰 Tools
🪛 Trivy (0.69.3)

[error] 15-17: S3 encryption should use Customer Managed Keys

Bucket does not encrypt data with a customer managed key.

Rule: AWS-0132

Resource: aws_s3_bucket.artifacts

Learn more

(IaC/AWS)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/main.tf` around lines 15 - 17, Add server-side encryption using a KMS
CMK and public-access blocking/versioning for the S3 buckets: create a KMS key
resource (aws_kms_key.s3) and attach
aws_s3_bucket_server_side_encryption_configuration resources for
aws_s3_bucket.artifacts and aws_s3_bucket.logs using that key and "aws:kms"; add
aws_s3_bucket_public_access_block for aws_s3_bucket.logs with block_public_acls,
block_public_policy, ignore_public_acls, and restrict_public_buckets set to
true; and enable versioning on aws_s3_bucket.logs (aws_s3_bucket.logs versioning
configuration) to preserve an audit trail.


resource "aws_s3_bucket_acl" "artifacts" {
bucket = aws_s3_bucket.artifacts.id
acl = "public-read"
}

resource "aws_s3_bucket_public_access_block" "artifacts" {
bucket = aws_s3_bucket.artifacts.id

block_public_acls = false
block_public_policy = false
ignore_public_acls = false
restrict_public_buckets = false
}
Comment on lines +19 to +31

⚠️ Potential issue | 🔴 Critical

Critical: S3 bucket configured for public access exposes data to the internet.

The artifacts bucket uses public-read ACL and explicitly disables all public access block protections. This configuration allows anyone on the internet to read bucket contents, creating significant data exposure risk.

If public access is truly required (e.g., static website hosting), consider:

  • Using CloudFront with Origin Access Control instead of direct public bucket access
  • Limiting to specific objects rather than bucket-wide ACL
  • Adding explicit bucket policy with conditions
🔒 Proposed fix for secure bucket configuration
-resource "aws_s3_bucket_acl" "artifacts" {
-  bucket = aws_s3_bucket.artifacts.id
-  acl    = "public-read"
-}
-
-resource "aws_s3_bucket_public_access_block" "artifacts" {
-  bucket = aws_s3_bucket.artifacts.id
-
-  block_public_acls       = false
-  block_public_policy     = false
-  ignore_public_acls      = false
-  restrict_public_buckets = false
-}
+resource "aws_s3_bucket_public_access_block" "artifacts" {
+  bucket = aws_s3_bucket.artifacts.id
+
+  block_public_acls       = true
+  block_public_policy     = true
+  ignore_public_acls      = true
+  restrict_public_buckets = true
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
resource "aws_s3_bucket_acl" "artifacts" {
bucket = aws_s3_bucket.artifacts.id
acl = "public-read"
}
resource "aws_s3_bucket_public_access_block" "artifacts" {
bucket = aws_s3_bucket.artifacts.id
block_public_acls = false
block_public_policy = false
ignore_public_acls = false
restrict_public_buckets = false
}
resource "aws_s3_bucket_public_access_block" "artifacts" {
bucket = aws_s3_bucket.artifacts.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
🧰 Tools
🪛 Trivy (0.69.3)

[error] 27-27: S3 Access block should block public ACL

Public access block does not block public ACLs

Rule: AWS-0086

Resource: aws_s3_bucket_public_access_block.artifacts

Learn more

(IaC/AWS)


[error] 28-28: S3 Access block should block public policy

Public access block does not block public policies

Rule: AWS-0087

Resource: aws_s3_bucket_public_access_block.artifacts

Learn more

(IaC/AWS)


[error] 29-29: S3 Access Block should Ignore Public ACL

Public access block does not ignore public ACLs

Rule: AWS-0091

Resource: aws_s3_bucket_public_access_block.artifacts

Learn more

(IaC/AWS)


[error] 21-21: S3 Buckets not publicly accessible through ACL.

Bucket has a public ACL: "public-read"

Rule: AWS-0092

Resource: aws_s3_bucket_acl.artifacts

Learn more

(IaC/AWS)


[error] 30-30: S3 Access block should restrict public bucket to limit access

Public access block does not restrict public buckets

Rule: AWS-0093

Resource: aws_s3_bucket_public_access_block.artifacts

Learn more

(IaC/AWS)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/main.tf` around lines 19 - 31, The S3 bucket is currently exposed via
aws_s3_bucket_acl.artifacts (acl = "public-read") and
aws_s3_bucket_public_access_block.artifacts (all public blocks disabled); remove
or change the public-read ACL and re-enable public access protections by setting
block_public_acls, block_public_policy, ignore_public_acls, and
restrict_public_buckets to true on aws_s3_bucket_public_access_block.artifacts,
or remove aws_s3_bucket_acl.artifacts entirely and instead configure a secure
access pattern (e.g., serve via CloudFront with Origin Access Control or use a
scoped bucket policy for specific objects) referencing aws_s3_bucket.artifacts
as the bucket target.


resource "aws_s3_bucket" "logs" {
bucket = "demoapp-logs-prod"
}

resource "aws_s3_bucket_versioning" "logs" {
bucket = aws_s3_bucket.logs.id

versioning_configuration {
status = "Disabled"
}
}
50 changes: 50 additions & 0 deletions trivy/network.tf
@@ -0,0 +1,50 @@
resource "aws_security_group" "web" {
name = "demoapp-web"
description = "Public web tier security group"
vpc_id = "vpc-0123456789abcdef0"

ingress {
description = "SSH from anywhere"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "RDP from anywhere"
from_port = 3389
to_port = 3389
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "All TCP"
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
Comment on lines +6 to +28

⚠️ Potential issue | 🔴 Critical

Critical: Security group allows unrestricted inbound access from the internet.

The ingress rules permit:

  • SSH (22) from 0.0.0.0/0
  • RDP (3389) from 0.0.0.0/0
  • All TCP ports (0-65535) from 0.0.0.0/0

This effectively makes any resource using this security group fully accessible from the internet. Restrict CIDR blocks to known IP ranges or use VPN/bastion access patterns.

🔒 Proposed fix with restricted access
   ingress {
     description = "SSH from anywhere"
     from_port   = 22
     to_port     = 22
     protocol    = "tcp"
-    cidr_blocks = ["0.0.0.0/0"]
+    cidr_blocks = ["10.0.0.0/8"]  # Internal network only
   }

-  ingress {
-    description = "RDP from anywhere"
-    from_port   = 3389
-    to_port     = 3389
-    protocol    = "tcp"
-    cidr_blocks = ["0.0.0.0/0"]
-  }
-
-  ingress {
-    description = "All TCP"
-    from_port   = 0
-    to_port     = 65535
-    protocol    = "tcp"
-    cidr_blocks = ["0.0.0.0/0"]
-  }
+  # Remove overly permissive rules; add specific ports as needed
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "RDP from anywhere"
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "All TCP"
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"] # Internal network only
  }
  # Remove overly permissive rules; add specific ports as needed
🧰 Tools
🪛 Trivy (0.69.3)

[error] 11-11: Security groups should not allow unrestricted ingress to SSH or RDP from any IP address.

Security group rule allows unrestricted ingress from any IP address.

Rule: AWS-0107

Resource: aws_security_group.web

Learn more

(IaC/AWS)


[error] 19-19: Security groups should not allow unrestricted ingress to SSH or RDP from any IP address.

Security group rule allows unrestricted ingress from any IP address.

Rule: AWS-0107

Resource: aws_security_group.web

Learn more

(IaC/AWS)


[error] 27-27: Security groups should not allow unrestricted ingress to SSH or RDP from any IP address.

Security group rule allows unrestricted ingress from any IP address.

Rule: AWS-0107

Resource: aws_security_group.web

Learn more

(IaC/AWS)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/network.tf` around lines 6 - 28, The security group ingress blocks
currently allow unrestricted access; update the three ingress entries (the SSH
ingress for port 22, the RDP ingress for port 3389, and the “All TCP” ingress
from_port 0 to to_port 65535) to restrict CIDR blocks to known admin IP ranges
or internal CIDRs, remove or narrow the “All TCP” rule, and instead expose
management ports via a bastion host or VPN/security-group-only access (e.g.,
reference the bastion SG or an admin CIDR) so that SSH/RDP are only reachable
from trusted sources rather than 0.0.0.0/0.


  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_db_instance" "primary" {
  identifier             = "demoapp-primary"
  engine                 = "postgres"
  engine_version         = "14.7"
  instance_class         = "db.t3.medium"
  allocated_storage      = 20
  username               = "demoapp"
  password               = "Sup3rS3cr3tP@ssword"
  publicly_accessible    = true
  storage_encrypted      = false
  skip_final_snapshot    = true
  vpc_security_group_ids = [aws_security_group.web.id]
}
Comment on lines +38 to +50

⚠️ Potential issue | 🔴 Critical

Critical: RDS instance is publicly accessible with unencrypted storage and hardcoded credentials.

Multiple security issues:

  1. publicly_accessible = true exposes the database endpoint to the internet
  2. storage_encrypted = false leaves data-at-rest unprotected
  3. Password hardcoded in Terraform (also duplicated in terraform.tfvars)

Combined with the open security group, this database is directly attackable from the internet.

🔐 Proposed secure RDS configuration
 resource "aws_db_instance" "primary" {
   identifier             = "demoapp-primary"
   engine                 = "postgres"
   engine_version         = "14.7"
   instance_class         = "db.t3.medium"
   allocated_storage      = 20
-  username               = "demoapp"
-  password               = "Sup3rS3cr3tP@ssword"
-  publicly_accessible    = true
-  storage_encrypted      = false
+  username               = var.db_username
+  manage_master_user_password = true  # Uses Secrets Manager
+  publicly_accessible    = false
+  storage_encrypted      = true
+  kms_key_id             = aws_kms_key.rds.arn
   skip_final_snapshot    = true
   vpc_security_group_ids = [aws_security_group.web.id]
+  db_subnet_group_name   = aws_db_subnet_group.private.name
 }
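
Note: the sketch above references aws_kms_key.rds and aws_db_subnet_group.private, which are not part of this change. Minimal placeholder definitions (subnet IDs are assumptions) could look like:

resource "aws_kms_key" "rds" {
  description             = "KMS key for RDS storage encryption"
  deletion_window_in_days = 7
}

resource "aws_db_subnet_group" "private" {
  name       = "demoapp-private"
  subnet_ids = ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"] # placeholder private subnets
}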
🧰 Tools
🪛 Trivy (0.69.3)

[error] 47-47: RDS encryption has not been enabled at a DB Instance level.

Instance does not have storage encryption enabled.

Rule: AWS-0080

Resource: aws_db_instance.primary

Learn more

(IaC/AWS)


[error] 46-46: RDS Publicly Accessible

Instance has Public Access enabled

Rule: AWS-0180

Resource: aws_db_instance.primary

Learn more

(IaC/AWS)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/network.tf` around lines 38 - 50, The aws_db_instance resource
(aws_db_instance.primary) is insecure: set publicly_accessible = false,
storage_encrypted = true, and stop hardcoding the password; replace the literal
password with a reference to a secret or variable (e.g., use a Secrets Manager
data/resource or var.db_password) and remove any duplicate plaintext in
terraform.tfvars; also ensure vpc_security_group_ids points to a private/limited
security group (not an open web SG) and consider enabling snapshot retention
(skip_final_snapshot = false) as appropriate for safe teardown.

22 changes: 22 additions & 0 deletions trivy/secrets.tf
@@ -0,0 +1,22 @@
provider "aws" {
alias = "deploy"
region = "us-west-2"
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
Comment on lines +1 to +6

⚠️ Potential issue | 🔴 Critical

Critical: AWS credentials hardcoded in provider configuration.

Embedding access_key and secret_key directly in Terraform files exposes credentials in version control and state files. These are the same canonical example keys appearing in Dockerfile.legacy.

AWS credentials should be provided via:

  • Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
  • Shared credentials file (~/.aws/credentials)
  • IAM instance profile / IRSA / OIDC federation
  • AWS Secrets Manager with aws_secretsmanager_secret_version data source
🔐 Proposed fix removing hardcoded credentials
 provider "aws" {
   alias  = "deploy"
   region = "us-west-2"
-  access_key = "AKIAIOSFODNN7EXAMPLE"
-  secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
+  # Credentials provided via environment or instance profile
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
provider "aws" {
alias = "deploy"
region = "us-west-2"
access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}
provider "aws" {
alias = "deploy"
region = "us-west-2"
# Credentials provided via environment or instance profile
}
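
If the deploy alias genuinely needs credentials separate from the default chain, a named profile keeps them out of the configuration (the profile name "deploy" here is an assumption):

provider "aws" {
  alias   = "deploy"
  region  = "us-west-2"
  profile = "deploy" # resolved from the shared credentials/config files
}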
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/secrets.tf` around lines 1 - 6, The AWS provider block currently
hardcodes credentials (provider "aws" with alias "deploy" and fields
access_key/secret_key); remove the access_key and secret_key attributes and
instead configure the provider to use external credentials (env vars, shared
credentials file, or instance role). Update provider "aws" (alias "deploy") to
rely on the default credential chain and, if needed, add documentation or
variables pointing to AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, a shared
credentials profile, or an aws_secretsmanager_secret_version data source so
secrets are not stored in the Terraform file or state.


resource "aws_ssm_parameter" "datadog_key" {
name = "/demoapp/observability/datadog_api_key"
type = "String"
value = "1234567890abcdef1234567890abcdef"
}

resource "aws_ssm_parameter" "internal_token" {
name = "/demoapp/internal/service_token"
type = "String"
value = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
}
Comment on lines +8 to +18

⚠️ Potential issue | 🟠 Major

SSM parameters use String type instead of SecureString for secrets.

Storing API keys and tokens with type = "String" means values are stored and displayed in plaintext. Use type = "SecureString" for automatic KMS encryption.

🔐 Proposed fix using SecureString
 resource "aws_ssm_parameter" "datadog_key" {
   name  = "/demoapp/observability/datadog_api_key"
-  type  = "String"
-  value = "1234567890abcdef1234567890abcdef"
+  type  = "SecureString"
+  value = var.datadog_api_key  # Pass via TF_VAR or secrets manager
 }
 
 resource "aws_ssm_parameter" "internal_token" {
   name  = "/demoapp/internal/service_token"
-  type  = "String"
-  value = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
+  type  = "SecureString"
+  value = var.internal_token  # Pass via TF_VAR or secrets manager
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/secrets.tf` around lines 8 - 18, Change the two SSM parameters defined
by aws_ssm_parameter.datadog_key and aws_ssm_parameter.internal_token from type
= "String" to type = "SecureString" so the values are stored encrypted; also set
or accept a kms_key_id (or use the default AWS managed key) when creating these
resources to ensure KMS encryption is applied and update any docs/variables that
supply the plaintext values accordingly.


resource "aws_db_instance_password" "fallback" {
password = "Sup3rS3cr3tP@ssword"
}
Comment on lines +20 to +22

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Terraform AWS provider aws_db_instance_password resource

💡 Result:

There is no "aws_db_instance_password" resource in the Terraform AWS provider. Extensive searches across official Terraform Registry documentation, GitHub issues, and related resources show no evidence of such a resource existing as of 2026-04-25. Password management for AWS RDS DB instances (aws_db_instance) is handled directly within that resource via attributes like: - password: Plain-text password (stored in state, sensitive). - password_wo: Write-only password (Terraform 1.11+, not stored in state). - manage_master_user_password: Integrates with AWS Secrets Manager for managed passwords (recommended for security). Examples from official docs: resource "aws_db_instance" "example" { username = "foo" password = "foobarbaz" # Or use password_wo or manage_master_user_password # ... } For secure practices, use random_password resource or Secrets Manager integration. No separate resource for just the password exists; it's part of aws_db_instance or aws_rds_cluster.


Remove aws_db_instance_password resource—this resource does not exist in the Terraform AWS provider.

Terraform will fail during plan or apply. For RDS password management, use aws_db_instance directly with password, password_wo, or manage_master_user_password = true (for Secrets Manager integration).

Current code
resource "aws_db_instance_password" "fallback" {
  password = "Sup3rS3cr3tP@ssword"
}
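
A sketch of the Secrets Manager-backed alternative applied to the existing aws_db_instance (other arguments unchanged and omitted here):

resource "aws_db_instance" "primary" {
  # ... existing arguments ...
  username                    = "demoapp"
  manage_master_user_password = true # master password generated and stored in AWS Secrets Manager
  # no password argument when manage_master_user_password is set
}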
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/secrets.tf` around lines 20 - 22, The resource aws_db_instance_password
"fallback" is invalid in the AWS Terraform provider; remove that resource and
instead set the DB password on an aws_db_instance resource (e.g., add or update
the password attribute on your aws_db_instance resource) or enable
manage_master_user_password = true to integrate with Secrets Manager and avoid
hardcoding; replace references to aws_db_instance_password.fallback with the
aws_db_instance's password handling (or reference the Secrets Manager secret) so
Terraform plan/apply will succeed.

6 changes: 6 additions & 0 deletions trivy/terraform.tfvars
@@ -0,0 +1,6 @@
environment = "production"
region = "us-east-1"
db_username = "demoapp"
db_password = "Sup3rS3cr3tP@ssword"
admin_api_token = "internal_token_2c8b41d9c0a64e1e9b0f3e7a1d5c8b41"
private_key_pem = "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAyqXmSVk3...truncated...AAAA\n-----END RSA PRIVATE KEY-----"
Comment on lines +1 to +6

⚠️ Potential issue | 🟡 Minor

Variables defined in tfvars but not declared or consumed.

Per the related context, network.tf hardcodes username and password directly instead of referencing var.db_username and var.db_password. Similarly, main.tf hardcodes the region. Without corresponding variable blocks in .tf files, these tfvars values are unused.

Either remove unused tfvars entries or add proper variable declarations and references.
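
A minimal sketch of the missing declarations and references (the variable names mirror the tfvars keys; which of them are actually needed is a judgment call for the author):

# variables.tf
variable "region" {
  type = string
}

variable "db_username" {
  type = string
}

variable "db_password" {
  type      = string
  sensitive = true
}

# main.tf
provider "aws" {
  region = var.region
}

# network.tf (inside aws_db_instance.primary)
#   username = var.db_username
#   password = var.db_password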

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/terraform.tfvars` around lines 1 - 6, The tfvars entries (db_username,
db_password, region, admin_api_token, private_key_pem) are not used because
Terraform files hardcode values; fix by adding variable declarations and
replacing hardcoded literals with variable references: create variable blocks
named "db_username", "db_password", "region", "admin_api_token", and
"private_key_pem" and update network.tf to use var.db_username and
var.db_password instead of hardcoded username/password and update main.tf to use
var.region; if admin_api_token and private_key_pem are not needed, remove them
from terraform.tfvars instead of adding variables.

Comment on lines +4 to +6

⚠️ Potential issue | 🔴 Critical

Critical: Plaintext secrets committed to version control.

This file contains plaintext database password, API token, and RSA private key. Once committed, these secrets persist in git history even if removed later.

Production secrets should be:

  • Stored in a secrets manager (AWS Secrets Manager, HashiCorp Vault)
  • Referenced via data sources or environment variables
  • Never committed to version control
🔐 Recommended approach using environment variables or data sources
# variables.tf
variable "db_password" {
  type      = string
  sensitive = true
}

variable "admin_api_token" {
  type      = string
  sensitive = true
}

# Then set via environment: TF_VAR_db_password, TF_VAR_admin_api_token
# Or use data sources:
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "demoapp/db/password"
}

Remove terraform.tfvars from version control and add to .gitignore.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@trivy/terraform.tfvars` around lines 4 - 6, The committed terraform.tfvars
contains plaintext secrets (db_password, admin_api_token, private_key_pem);
remove these literal values from the file and from version control history, add
terraform.tfvars to .gitignore, and replace them by referencing secure sources:
create sensitive variables in variables.tf (e.g., db_password, admin_api_token,
private_key_pem with sensitive = true) and load their values via TF_VAR env vars
or a secrets data source (e.g., aws_secretsmanager_secret_version) in your
Terraform config; rotate the exposed credentials and purge them from Git history
after migrating to the secrets manager.