I’ve been working on getting a Terraform Lambda with API Gateway and CloudFront solution “out there” :)
GitHub: terraform-lambda-api-gateway-cloudfront
Terraform:
terraform plan
terraform apply
terraform destroy
Terraform
I created an output with details of the API Gateway and CloudFront endpoints:
Outputs:
api_gateway_base_url = "https://?????????.execute-api.eu-west-1.amazonaws.com/serverless_lambda_stage"
cloudfront_domain_name = "?????????.cloudfront.net"
function_name = "LambdaHelloWorld"
The terraform apply, showing the state refresh:
$ terraform apply
data.archive_file.lambda_hello_world: Read complete after 0s [id=09af68f7ed976c5a14f035dc943f9283a1635106]
aws_iam_role.lambda_exec: Refreshing state... [id=serverless_lambda]
aws_apigatewayv2_api.lambda: Refreshing state... [id=?????????]
aws_cloudwatch_log_group.api_gw: Refreshing state... [id=/aws/api_gw/serverless_lambda_gw]
aws_cloudfront_distribution.hello_world: Refreshing state... [id=E1HUWRD4F8Y21K]
aws_apigatewayv2_stage.lambda: Refreshing state... [id=serverless_lambda_stage]
aws_iam_role_policy_attachment.lambda_policy: Refreshing state... [id=serverless_lambda-20240201200145515300000002]
aws_lambda_function.lambda_hello_world: Refreshing state... [id=LambdaHelloWorld]
aws_lambda_permission.api_gw: Refreshing state... [id=AllowExecutionFromAPIGateway]
aws_cloudwatch_log_group.hello_world: Refreshing state... [id=/aws/lambda/LambdaHelloWorld]
aws_apigatewayv2_integration.hello_world: Refreshing state... [id=nya2ry1]
aws_apigatewayv2_route.hello_world_options: Refreshing state... [id=6swtye3]
aws_apigatewayv2_route.hello_world: Refreshing state... [id=f04e338]
To execute the Lambda via CloudFront:
$ curl -v https://?????????.cloudfront.net/serverless_lambda_stage/hello
* Trying ????????????????...
* Connected to ?????????.cloudfront.net (?????????????????) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
[snip]
{"message":"Hello, World!"}
To execute the Lambda via API Gateway:
$ curl -v https://?????????.execute-api.eu-west-1.amazonaws.com/serverless_lambda_stage/hello
* Trying ????????????????...
* Connected to ?????????.execute-api.eu-west-1.amazonaws.com (????????????????) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
[snip]
{"message":"Hello, World!"}
The API Gateway stage name is serverless_lambda_stage.
The aws_apigatewayv2_route route_key is hello.
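The key piece of wiring is pointing a CloudFront origin at the API Gateway endpoint. A sketch of how that might look in the Terraform (the resource names match the apply output above, but the body here is my assumption; see the repo for the real config):

```hcl
resource "aws_cloudfront_distribution" "hello_world" {
  enabled = true

  origin {
    # Strip the scheme from the API Gateway endpoint to get a bare domain name
    domain_name = replace(aws_apigatewayv2_api.lambda.api_endpoint, "https://", "")
    origin_id   = "api-gateway"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD", "OPTIONS"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "api-gateway"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```

Because the CloudFront path is passed through to the origin, the stage name appears in the CloudFront URL too (hence /serverless_lambda_stage/hello in the curl above).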
Don’t forget to run terraform destroy, so you don’t incur additional AWS costs.
SLF4J: No SLF4J providers were found.
SLF4J: Defaulting to no-operation (NOP) logger implementation
This was mid-coding, right when I needed logs to “display” :)
The fix below also prevents the error:
java.lang.NoSuchMethodError: 'java.lang.ClassLoader ch.qos.logback.core.util.Loader.systemClassloaderIfNull(java.lang.ClassLoader)'
I added the following to my pom.xml:
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.4.13</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.4.13</version>
</dependency>
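Logback will log to the console with a sensible default configuration, but if you want control over the pattern and level, a minimal logback.xml on the classpath does it (this file is my assumption, not part of the original fix):

```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="info">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```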
There are a number of Terraform Lambda tutorials which say you need an S3 bucket (for the Layer).
You don’t.
I recently implemented a POC which had one job: to implement a Lambda Layer for some simple Node code, and wrap it in Terraform.
Terraform and Node code (GitHub)
Create and zip source and dependencies:
cd layers/date-fns/nodejs
npm init -y
npm install date-fns@2.24.0
zip -r9 date-fns-layer.zip .
Zip the Lambda code:
zip -r9 lambda.zip index.js
Terraform:
terraform plan
terraform apply
terraform destroy
Terraform
The Layer:
resource "aws_lambda_layer_version" "simple_node_lambda-layer" {
    filename            = "layers/date-fns/date-fns-layer.zip"
    layer_name          = "simple_node_lambda-layer"
    source_code_hash    = filebase64sha256("layers/date-fns/date-fns-layer.zip")
    compatible_runtimes = ["nodejs14.x"]
}
The Lambda references the Layer:
resource "aws_lambda_function" "simple_node_lambda_function" {
    filename         = "lambda.zip"
    function_name    = "SimpleNodeLambdaFunction"
    role             = aws_iam_role.simple-node-lambda_role.arn
    handler          = "index.handler"
    source_code_hash = filebase64sha256("lambda.zip")
    runtime          = "nodejs14.x"
    layers           = [aws_lambda_layer_version.simple_node_lambda-layer.arn]
}
I found a useful Terraform resource, aws_lambda_invocation:
resource "aws_lambda_invocation" "lambda_invocation" {
    function_name = "SimpleNodeLambdaFunction"

    input = jsonencode({
        hello = "world"
    })

    triggers = {
        redeployment = timestamp()
    }

    depends_on = [
        aws_lambda_function.simple_node_lambda_function
    ]
}
I created an output which returns the result of the Lambda invocation:
output "lambda_invoke" {
    description = "Result of Lambda invocation"
    value       = aws_lambda_invocation.lambda_invocation.result
}
The terraform apply, showing the invocation and output:
aws_iam_role.simple-node-lambda_role: Creating...
aws_cloudwatch_log_group.main: Creating...
aws_lambda_layer_version.simple_node_lambda-layer: Creating...
aws_cloudwatch_log_group.main: Creation complete after 0s [id=SimpleNodeLambdaFunction]
aws_iam_role.simple-node-lambda_role: Creation complete after 1s [id=simple-node_role]
aws_iam_role_policy.lambda_basic_policy: Creating...
aws_iam_role_policy.lambda_basic_policy: Creation complete after 0s [id=simple-node_role:lambda_simple_node_basic_policy]
aws_lambda_layer_version.simple_node_lambda-layer: Still creating... [10s elapsed]
aws_lambda_layer_version.simple_node_lambda-layer: Creation complete after 15s [id=arn:aws:lambda:eu-west-1:457954557100:layer:simple_node_lambda-layer:7]
aws_lambda_function.simple_node_lambda_function: Creating...
aws_lambda_function.simple_node_lambda_function: Creation complete after 9s [id=SimpleNodeLambdaFunction]
aws_lambda_invocation.lambda_invocation: Creating...
aws_lambda_invocation.lambda_invocation: Creation complete after 1s [id=SimpleNodeLambdaFunction_$LATEST_fbc24bcc7a1794758fc1327fcfebdaf6]
Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
Outputs:
lambda_invoke = "{\"statusCode\":200,\"body\":\"{\\\"today\\\":\\\"👉️ Today is a Sunday\\\"}\"}"
Terraform (continued):
IAM role:
resource "aws_iam_role" "simple-node-lambda_role" {
    name = "simple-node_role"

    assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Effect": "Allow",
            "Sid": ""
        }
    ]
}
EOF
}
IAM role policy (for CloudWatch Logs):
resource "aws_iam_role_policy" "lambda_basic_policy" {
    name = "lambda_simple_node_basic_policy"
    role = aws_iam_role.simple-node-lambda_role.id

    policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}
EOF
}
resource "aws_cloudwatch_log_group" "main" {
    name              = "SimpleNodeLambdaFunction"
    retention_in_days = 1
}
Volunteering is great, but due diligence is important…
Unsurprisingly, everyone is after your time.
Over the last fifteen years (in the UK), numerous “opportunities” have been replaced with volunteers.
I’m an advocate of volunteering. I used to volunteer for BBC Careers events and Chesham community helping-hands during Covid, and intend to use my time to volunteer for the Institution of Engineering and Technology.
But, it’s important to carry out due-diligence and investigation before you take on any volunteer opportunity.
Unscrupulous people would love nothing better than someone coding for their app, project, club, or organisation.
Guess what: you might suddenly have a backlog, tasks, a diary, a project plan, and be browbeaten by other volunteers. Be careful of committees, large WhatsApp groups, and retired club secretaries.
Remember your time commitment, boundaries, and sense-of-humour.
Do some investigation.
Take a look at LinkedIn profiles, GitHub repos, websites, the individuals already on the project, venture capital raised, and paid employees who might appear to be volunteers.
It’s surprising what you can find at Companies House.
Be completely honest about the amount of time you can spend on the project. Create a boundary, and don’t over-commit. Some volunteers are lovely people. Others are not.
Ask questions.
Describe what you would like to get out of the volunteering experience. This should also be the basis of your boundary.
Other volunteers should respect these.
You should not feel pressure to fix x, y, and z, delaying others and shipping deadlines (“oh dear, the cloud infra has gone bang…”).
You are not a machine. This is a volunteer position, which should be fun**
If it doesn’t feel right, walk away.
Any self-respecting individual, club, project, app, or organisation will understand. If there is unnecessary drama, be stoic and discreetly walk away; you did the right thing.
In my experience, local grassroots volunteering is the best. If something doesn’t already exist, set up something yourself.
** This is important.
An hour spent learning Terraform
$ terraform show
# aws_s3_bucket.default:
resource "aws_s3_bucket" "default" {
    bucket = "myapp-milesd-chocksaway123"
    [snip]
}

# aws_s3_object.default:
resource "aws_s3_object" "default" {
    bucket = "myapp-milesd-chocksaway123"
    [snip]
}
Outputs:
aws_s3_bucket = "myapp-milesd-chocksaway123"
$ terraform destroy
aws_s3_bucket.default: Refreshing state... [id=myapp-milesd-chocksaway123]
aws_s3_object.default: Refreshing state... [id=beanstalk/myapp]
Plan: 0 to add, 0 to change, 2 to destroy.
Destroy complete! Resources: 2 destroyed.
$
$ aws s3api list-buckets
{
"Buckets": []
}
$
I’ve recently been converting a SAM script to Terraform.
I’ve been using tutorials, including https://developer.hashicorp.com/terraform/tutorials/aws/aws-iam-policy, which shows you how to assign an IAM policy to an S3 bucket (aws_s3_bucket). I’ve been successfully creating S3 buckets with policies :)
But I started getting an error when I (deliberately) went back through the Hashicorp aws-iam-policy tutorial (as a reference point).
Running through the tutorial (the main.tf is below) and doing a “terraform apply”:
$ terraform apply
[snip]
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 2s [id=milesd-checker-bucket]
data.aws_iam_policy_document.example: Reading...
aws_s3_bucket_acl.bucket: Creating...
data.aws_iam_policy_document.example: Read complete after 0s [id=***********]
aws_iam_policy.policy: Creating...
aws_iam_policy.policy: Creation complete after 2s [id=arn:aws:iam::**************:policy/milesd-checker-bucket-policy]
╷
│ Error: error creating S3 bucket ACL for milesd-checker-bucket: AccessControlListNotSupported: The bucket does not allow ACLs
│ status code: 400, request id: *******************, host id: **********************
│
│ with aws_s3_bucket_acl.bucket,
│ on main.tf line 19, in resource "aws_s3_bucket_acl" "bucket":
│ 19: resource "aws_s3_bucket_acl" "bucket" {
│
╵
The bucket does not allow ACLs
I’ve done some investigation and have found https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/issues/223
Specifically: “AWS announced in December for this month (April 2023) wherein S3 buckets would have ACLs disabled by default” (https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-s3-automatically-enable-block-public-access-disable-access-control-lists-buckets-april-2023/).
As soon as I find a fix, I will post an update.
Here is my main.tf:
provider "aws" {
    region = var.region

    default_tags {
        tags = {
            Hashicorp-Learn = "aws-iam-policy"
        }
    }
}

resource "aws_s3_bucket" "bucket" {
    bucket = "milesd-checker-bucket"
}

resource "aws_s3_bucket_acl" "bucket" {
    bucket = aws_s3_bucket.bucket.id
    acl    = "private"
}

data "aws_iam_policy_document" "example" {
    statement {
        actions   = ["s3:ListAllMyBuckets"]
        resources = ["arn:aws:s3:::*"]
        effect    = "Allow"
    }

    statement {
        actions   = ["s3:*"]
        resources = [aws_s3_bucket.bucket.arn]
        effect    = "Allow"
    }
}

resource "aws_iam_policy" "policy" {
    name        = "${aws_s3_bucket.bucket.id}-policy"
    description = "My test policy"
    policy      = data.aws_iam_policy_document.example.json
}
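One workaround discussed in that GitHub issue thread (a sketch, untested here) is to re-enable ACLs on the bucket by setting object ownership, and make the ACL resource wait for it, replacing the aws_s3_bucket_acl above with:

```hcl
# Re-enable ACLs by setting object ownership, then apply the ACL afterwards.
resource "aws_s3_bucket_ownership_controls" "bucket" {
  bucket = aws_s3_bucket.bucket.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "bucket" {
  depends_on = [aws_s3_bucket_ownership_controls.bucket]

  bucket = aws_s3_bucket.bucket.id
  acl    = "private"
}
```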
Over the Christmas break I have been doing lots of thinking about content, publishing, and real-time events.
I have a fascination with processing events in real time, from devices which are not always-on (connected).
Imager is a side-project where content is worked on locally, events are sent to a queue, and a (remote) Decider acts on those events.
The use-case for this side-project is a content management system, which is not served by a dynamic-content back-end server.
The Decider acts as a workflow engine. It will process instructions based on the event.
I want the chocksaway.com blog to be powered by Imager, and use Micronaut as a sort of local server, with a remote Decider workflow, which checks for new (static) content.
When a publish event is in the queue, the Decider acts on it: for example, pulling static HTML content (from an Imager endpoint) and saving it locally.
This would update the blog.
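As a purely illustrative sketch (nothing here exists in Imager yet; every name is hypothetical), the Decider boils down to dispatching queued events to handlers:

```javascript
// Hypothetical Decider sketch: pull events off a queue and dispatch by type.
const handlers = {
    // A publish event pulls static HTML from an Imager endpoint and saves it locally.
    publish: (event) => `pull ${event.path} from Imager, save locally`,
};

function decide(event) {
    const handler = handlers[event.type];
    // Unknown event types are ignored; the Decider only processes specific events.
    return handler ? handler(event) : null;
}

function drainQueue(queue) {
    return queue.map(decide).filter((action) => action !== null);
}
```

Run on a schedule, drainQueue is the whole engine: it only does work when a recognised event is waiting.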
Why does Imager exist?
Security and cost. I want to develop something which does not require a large overhead. A Decider can run on a schedule, and only process specific events.
The blog use-case keeps content static, reducing the attack surface.
I’ve been delving into distroless. I’ve refactored a Dropwizard example to use JKube.
It’s a basic Dropwizard application, with its own Dockerfile. I’ve used minikube for my local Kubernetes environment.
A quick summary:
When using JKube, paths in the Dockerfile take a maven/ prefix. This is specific to JKube. For example:
COPY maven/target/dropwizard-docker-jkube.jar ${APP_HOME}/bin
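For context, a hypothetical Dockerfile along these lines (the actual one lives in the repo; the base image and APP_HOME value here are assumptions):

```dockerfile
# Assumed base image; the repo's Dockerfile may differ.
FROM eclipse-temurin:17-jre
ENV APP_HOME=/app
# JKube stages the build context under a maven/ prefix, hence the path below.
COPY maven/target/dropwizard-docker-jkube.jar ${APP_HOME}/bin/
WORKDIR ${APP_HOME}
CMD ["java", "-jar", "bin/dropwizard-docker-jkube.jar", "server"]
```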
Most of the content is taken from the dropwizard-docker-jkube GitHub repo README.md.
The chocksaway/dropwizard-docker-jkube Docker Hub repository.
This is an example Dropwizard app using Java.
Ensure minikube is running:
minikube start --driver=docker
Ensure you have Java 8 or later.
./generate-keystore.sh
mvn clean package
java -jar target/dropwizard-docker-jkube.jar server
Build the Docker image:
$ mvn package k8s:build
[snip]
[INFO] --- kubernetes-maven-plugin:1.10.0:build (default-cli) @ dropwizard-docker-jkube ---
[INFO] k8s: Building Docker image in Kubernetes mode
[INFO] k8s: Using Dockerfile: /home/milesd/workspace/dropwizard-docker-jkube/Dockerfile
[INFO] k8s: Using Docker Context Directory: /home/milesd/workspace/dropwizard-docker-jkube
[INFO] k8s: [docker.io/chocksaway/dropwizard-docker-jkube:latest]: Created docker-build.tar in 144 milliseconds
[INFO] k8s: [docker.io/chocksaway/dropwizard-docker-jkube:latest]: Built image sha256:157f0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.750 s
[INFO] Finished at: 2022-11-15T17:58:37Z
List docker images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
chocksaway/dropwizard-docker-jkube latest b838b44d27b9 10 minutes ago 438MB
Log in to Docker Hub (or equivalent):
docker login
Push docker image:
$ docker push chocksaway/dropwizard-docker-jkube:latest
The push refers to repository [docker.io/chocksaway/dropwizard-docker-jkube]
b0ddc0dba107: Pushed
83a46287b450: Pushed
05a66a62d154: Pushed
697bf0c6d15e: Pushed
276a07e9d4e7: Pushed
05ccff4e0d22: Pushed
3c7c9248454d: Pushed
d2cd905c205e: Pushed
2110602b3735: Pushed
bb363ff790df: Pushed
dc10ed5dc4e8: Layer already exists
491cc2011e51: Layer already exists
0ad3ddf4a4ce: Layer already exists
latest: digest: sha256:8808f589070c2bcb75a019577cc94b81aaaa size: 3035
Deploy to Kubernetes:
$ mvn k8s:resource k8s:apply
[INFO] Scanning for projects...
[INFO]
[INFO] ---------------< com.chocksaway:dropwizard-docker-jkube >---------------
[INFO] Building dropwizard-docker-jkube 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- kubernetes-maven-plugin:1.10.0:resource (default-cli) @ dropwizard-docker-jkube ---
[INFO] k8s: Using Dockerfile: /home/milesd/workspace/dropwizard-docker-jkube/Dockerfile
[INFO] k8s: Using Docker Context Directory: /home/milesd/workspace/dropwizard-docker-jkube
[INFO] k8s: Using resource templates from /home/milesd/workspace/dropwizard-docker-jkube/src/main/jkube
[INFO] k8s: jkube-controller: Adding a default Deployment
[INFO] k8s: jkube-service: Adding a default service 'dropwizard-docker-jkube' with ports [8443]
[INFO] k8s: jkube-service-discovery: Using first mentioned service port '8443'
[INFO] k8s: jkube-revision-history: Adding revision history limit to 2
[INFO] k8s: validating /home/milesd/workspace/dropwizard-docker-jkube/target/classes/META-INF/jkube/kubernetes/dropwizard-docker-jkube-deployment.yml resource
[WARNING] Unknown keyword $module - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
[WARNING] Unknown keyword existingJavaType - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
[WARNING] Unknown keyword javaOmitEmpty - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
[INFO] k8s: validating /home/milesd/workspace/dropwizard-docker-jkube/target/classes/META-INF/jkube/kubernetes/dropwizard-docker-jkube-service.yml resource
[INFO]
[INFO] --- kubernetes-maven-plugin:1.10.0:apply (default-cli) @ dropwizard-docker-jkube ---
[INFO] k8s: Using Kubernetes at https://xxx.xxx.xxx.xxx:8443/ in namespace null with manifest /home/milesd/workspace/dropwizard-docker-jkube/target/classes/META-INF/jkube/kubernetes.yml
[INFO] k8s: Updating Service from kubernetes.yml
[INFO] k8s: Updated Service: target/jkube/applyJson/default/service-dropwizard-docker-jkube-2.json
[INFO] k8s: Updating Deployment from kubernetes.yml
[INFO] k8s: Updated Deployment: target/jkube/applyJson/default/deployment-dropwizard-docker-jkube-2.json
[INFO] k8s: HINT: Use the command `kubectl get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.711 s
[INFO] Finished at: 2022-11-15T18:03:04Z
[INFO] ------------------------------------------------------------------------
Check running pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
docker-file-simple-74ccd899f6-hc654 1/1 Running 0 131m
dropwizard-docker-jkube-c7cf8659c-982cb 1/1 Running 0 107s
dropwizard-java-example-785bd6f74-mfkbf 1/1 Running 0 79m
Check start-up logs:
$ kubectl logs dropwizard-docker-jkube-c7cf8659c-982cb
INFO [2022-11-15 18:02:42,560] org.eclipse.jetty.util.log: Logging initialized @607ms to org.eclipse.jetty.util.log.Slf4jLog
INFO [2022-11-15 18:02:42,587] io.dropwizard.server.DefaultServerFactory: Registering jersey handler with root path prefix: /
INFO [2022-11-15 18:02:42,587] io.dropwizard.server.DefaultServerFactory: Registering admin handler with root path prefix: /
INFO [2022-11-15 18:02:42,611] io.dropwizard.server.ServerFactory: Starting App
.___ .__ .___
__| _/______ ____ ________ _ _|__|____________ _______ __| _/
/ __ |\_ __ \/ _ \\____ \ \/ \/ / \___ /\__ \\_ __ \/ __ |
/ /_/ | | | \( <_> ) |_> > /| |/ / / __ \| | \/ /_/ |
\____ | |__| \____/| __/ \/\_/ |__/_____ \(____ /__| \____ |
\/ |__| \/ \/ \/
INFO [2022-11-15 18:02:42,671] org.eclipse.jetty.setuid.SetUIDListener: Opened application@4dbad37{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
INFO [2022-11-15 18:02:42,671] org.eclipse.jetty.setuid.SetUIDListener: Opened application@7b4acdc2{SSL, (ssl, http/1.1)}{0.0.0.0:8443}
INFO [2022-11-15 18:02:42,671] org.eclipse.jetty.setuid.SetUIDListener: Opened admin@26a262d6{HTTP/1.1, (http/1.1)}{0.0.0.0:8081}
INFO [2022-11-15 18:02:42,672] org.eclipse.jetty.server.Server: jetty-9.4.49.v20220914; built: 2022-09-14T01:07:36.601Z; git: 4231a3b2e4cb8548a412a789936d640a97b1aa0a; jvm 17.0.2+8-86
INFO [2022-11-15 18:02:42,794] org.eclipse.jetty.util.ssl.SslContextFactory: x509=X509@249e0271(dropwizard-java-example,h=[dropwizard-docker-jkube],a=[],w=[]) for Server@4893b344[provider=null,keyStore=file:///app/keystore.pfx,trustStore=null]
INFO [2022-11-15 18:02:42,840] io.dropwizard.jetty.HttpsConnectorFactory: Enabled protocols: [TLSv1.2]
INFO [2022-11-15 18:02:42,840] io.dropwizard.jetty.HttpsConnectorFactory: Disabled protocols: [SSLv2Hello, SSLv3, TLSv1, TLSv1.1, TLSv1.3]
INFO [2022-11-15 18:02:42,840] io.dropwizard.jetty.HttpsConnectorFactory: Enabled cipher suites: [TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
INFO [2022-11-15 18:02:42,840] io.dropwizard.jetty.HttpsConnectorFactory: Disabled cipher suites: [TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_256_GCM_SHA384]
INFO [2022-11-15 18:02:43,006] io.dropwizard.jersey.DropwizardResourceConfig: The following paths were found for the configured resources:
GET / (com.chocksaway.controller.RootResource)
INFO [2022-11-15 18:02:43,007] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@4c1d59cd{/,null,AVAILABLE}
INFO [2022-11-15 18:02:43,008] io.dropwizard.setup.AdminEnvironment: tasks =
POST /tasks/log-level (io.dropwizard.servlets.tasks.LogConfigurationTask)
POST /tasks/gc (io.dropwizard.servlets.tasks.GarbageCollectionTask)
INFO [2022-11-15 18:02:43,009] org.eclipse.jetty.server.handler.ContextHandler: Started i.d.j.MutableServletContextHandler@65e22def{/,null,AVAILABLE}
INFO [2022-11-15 18:02:43,012] org.eclipse.jetty.server.AbstractConnector: Started application@4dbad37{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
INFO [2022-11-15 18:02:43,012] org.eclipse.jetty.server.AbstractConnector: Started application@7b4acdc2{SSL, (ssl, http/1.1)}{0.0.0.0:8443}
INFO [2022-11-15 18:02:43,012] org.eclipse.jetty.server.AbstractConnector: Started admin@26a262d6{HTTP/1.1, (http/1.1)}{0.0.0.0:8081}
INFO [2022-11-15 18:02:43,012] org.eclipse.jetty.server.Server: Started @1060ms
Useful kubectl commands:
kubectl get pods
kubectl delete pod pod-name
kubectl logs pod-name
kubectl get pod -o wide
Ensure you have a working Docker environment.
make dist image run
Point your browser to http://localhost:8080, or use curl from the command line:
curl -v http://localhost:8080/
curl -v -k https://localhost:8443/
Operational menu endpoint:
http://localhost:8081
I came across a strange situation: I would call a @Get endpoint which took a @PathVariable of type ObjectId, but got a Bad Request (from Postman).
I found another project (https://github.com/hantsy/micronaut-sandbox/tree/master/mongodb-album-service) which allowed me to pass the @PathVariable successfully (to MongoDB). The only issue was that it used Gradle, and I’m using Maven.
I found a useful Stack Overflow post which described using the maven-publish plugin to convert build.gradle to pom.xml:
(https://stackoverflow.com/questions/12888490/gradle-build-gradle-to-maven-pom-xml)
Since Gradle 7, the maven-publish tasks are automatically added to your Gradle build (https://docs.gradle.org/current/userguide/publishing_maven.html).
Micronaut uses sdkman (https://sdkman.io) for its tooling.
I managed to convert build.gradle to pom.xml by following these steps:
(i). I installed Gradle 7.4.2 by running:
sdk list gradle
sdk install gradle 7.4.2
Available Gradle Versions
================================================================================
7.5-rc-1 6.2.2 4.7 2.8
> * 7.4.2 6.2.1 4.6 2.7
(ii). I then added the following to my build.gradle:
Add maven-publish to my plugins:
plugins {
    id 'maven-publish'
}
Add the following publishing configuration:
publishing {
    publications {
        maven(MavenPublication) {
            groupId = 'edu.bbte.gradleex.mavenplugin'
            artifactId = 'gradleex-mavenplugin'
            version = '1.0.0-SNAPSHOT'

            from components.java
        }
    }
}
(iii). Run the generatePomFileForMavenPublication task:
When you look at the Gradle tasks, you will see (under publishing) a generatePomFileForMavenPublication task:
$ gradle generatePomFileForMavenPublication
BUILD SUCCESSFUL in 554ms
1 actionable task: 1 executed
A file named ./build/publications/maven/pom-default.xml will have been generated.
(iv). I copied this file to my project root, renaming to pom.xml.
I then closed the project, created a new one (using IntelliJ IDEA), and was able to confirm that the controller @Get was now working as expected.
Marvellous :)
Lack of a design authority is one of the biggest maladies in (technology) engineering today
I’m in the process of getting a garage-workshop built. Before submitting a planning application, I drew up a design and got a good idea of what I wanted. Compromise, a limited budget, a small garden: but there is a clear, concise idea of what is required, the steps needed to build the garage, and how to deliver a great workshop.
Deliver is the key.
There is no danger of an abstract heli-pad replacing it at the last moment.
I cannot believe that this situation is present in so many tech companies. People fail to define what they actually want. They will use terms like “fail fast”, “iterative prototyping”, and “abstract data model.”
Not being able to find the right people, and a lack of budget, lead to questionable decisions being made.
Spreading a team geographically all over the place, sub-contracting, or ….. none of this resolves the hot potato of: what do you actually want? Time ticks by, the budget gets burned.
I have been faced with this situation three times professionally. I have been able to fix things twice.
Fixing involved me designing, pitching, and implementing components, which were tweaked and became the production norm. Because of the (manageable) scale, I was able to crack on and get things done. I wonder if people realise the amount of blood, sweat, and teeth-gnashing involved. Fix is a highly subjective term.
The third one was huge, and involved ambiguity around the design, features, the data model, and how, who, and what was going to be delivered. The intended cross-customer software-as-a-service, with real-time, high-volume, concurrent events, increased the complexity no end. This was at the exploration phase.
The lack of a rock-star CTO, and of a team of engineers in the same location as the product team, caused a real engineering disconnect. Enough said.
Going back to my garage-workshop, I’m confident it will deliver, and be spot-on :)
You cannot boil the ocean.