A Web App Framework

This is documentation, essentially for myself, on how to deploy a web application. It's opinionated and will change over time. Here are the buzzwords:

  • Next.js
  • TypeScript
  • Progressive Web App (PWA)
  • Go
  • Google Cloud Run (Serverless)
  • Firebase

Next.js Guide

Setup Next.js

Next.js makes React simple and configurable to set up. Docs are here.
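
If you're starting from scratch, the quickest route is to scaffold a project with create-next-app (my-app below is just a placeholder name):

npx create-next-app my-app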

Setup TypeScript

Docs are here. You will want to install the types as dev dependencies: npm install --save-dev typescript @types/react.

tsconfig.json:

{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": false,
    "forceConsistentCasingInFileNames": true,
    "noEmit": true,
    "esModuleInterop": true,
    "module": "esnext",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsx": "preserve"
  },
  "exclude": ["node_modules"],
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx"]
}

Setup PWA

Progressive Web App support comes from the next-pwa plugin. Docs are here.
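
Install the plugin first:

npm install next-pwa

Then wrap your next.config.js with it: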

const withPWA = require("next-pwa");

module.exports = withPWA({
  pwa: {
    dest: "public",
  },
  async rewrites() {
    return [
      {
        source: "/api/:path*",
        destination: "http://localhost:8000/:path*", // Proxy to Backend
      },
    ];
  },
});

The important part is the local /api rewrite for development: it proxies all /api calls to your local backend service.

Setup Tailwind CSS

Docs are here. You need to create the tailwind.config.js and postcss.config.js files.
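
Installing tailwindcss, postcss and autoprefixer and running npx tailwindcss init -p will generate both files. A minimal sketch of what you end up with (assuming Tailwind v3, where the key is content rather than the older purge):

// tailwind.config.js
module.exports = {
  // Scan these files for class names to keep in the build.
  content: ["./pages/**/*.{js,ts,jsx,tsx}", "./components/**/*.{js,ts,jsx,tsx}"],
  theme: { extend: {} },
  plugins: [],
};

// postcss.config.js
module.exports = {
  plugins: {
    tailwindcss: {},
    autoprefixer: {},
  },
};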

I recommend keeping a global styles.css so you can still bundle classes together.

Setup Dark Mode

Support for dark mode is nice, and next-themes provides it out of the box.
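
A minimal sketch of wiring it up in pages/_app.tsx (assuming the global stylesheet lives at styles/global.css, as in the layout below):

import type { AppProps } from "next/app";
import { ThemeProvider } from "next-themes";
import "../styles/global.css";

export default function App({ Component, pageProps }: AppProps) {
  // attribute="class" makes next-themes toggle the `dark` class Tailwind uses.
  return (
    <ThemeProvider attribute="class">
      <Component {...pageProps} />
    </ThemeProvider>
  );
}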

Ensure your tailwind.config.js enables dark mode via the class strategy:

module.exports = {
  ...
  darkMode: "class",
  ...
}

Add color-scheme support to your styles.css. More info is on web.dev.
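
A minimal sketch, assuming next-themes puts the dark class on the html element:

:root {
  color-scheme: light;
}

.dark {
  color-scheme: dark;
}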

Firebase

Firebase is a useful application framework. It has good support for mobile apps too, and the web SDKs are nice. The main benefit I see is authentication.
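
A minimal sketch of authentication on the web (assuming the modular v9 SDK; the config values come from your Firebase console):

import { initializeApp } from "firebase/app";
import { getAuth, onAuthStateChanged, signInWithPopup, GoogleAuthProvider } from "firebase/auth";

const app = initializeApp({
  apiKey: "...",      // from the Firebase console
  authDomain: "...",
  projectId: "...",
});

const auth = getAuth(app);

// Observe sign-in state; trigger the popup sign-in from a button handler.
onAuthStateChanged(auth, (user) => console.log("user:", user?.uid));
export const signIn = () => signInWithPopup(auth, new GoogleAuthProvider());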

API

I'm a big fan of Protocol Buffers and gRPC. They're well defined and simple, with a lot of best practices built in. However, gRPC has typically not been useful for front-end work, as its wire protocol is binary HTTP/2 with trailing headers that browsers don't support. There are now a couple of options, though:

  • gRPC Web
  • REST Transcoding

My preference is transcoding. It gives a nice RESTful-style API that doesn't feel like a generated API.
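
As a sketch of what transcoding looks like (the method and messages here are hypothetical, though the service name matches the one registered in the server below), each RPC carries a google.api.http option mapping it to a REST route:

syntax = "proto3";

package app.api;

import "google/api/annotations.proto";

// Hypothetical resource, for illustration only.
message GetResourceRequest {
  string name = 1;
}

message Resource {
  string name = 1;
  string value = 2;
}

service Services {
  // Exposed as GET /v1/resources/*; the server below mounts it under /api/.
  rpc GetResource(GetResourceRequest) returns (Resource) {
    option (google.api.http) = {
      get: "/v1/{name=resources/*}"
    };
  }
}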

Larking

Larking is my transcoding project. It uses the new protobuf apiv2 to parse the HTTP options and generate the REST bindings. The best part is that it does this on the fly: no extra protoc options, no dealing with generated runtime code. It can also do this via server-side reflection, making it a proxy that adapts to server changes without having to reboot. The most useful case I've had, though, is for the server to run both the gRPC service and the HTTP REST service.

Below is my function that serves on the listener until failure:

func (s *Server) Serve(l net.Listener) error {
	// Shut down cleanly on Ctrl-C or SIGTERM.
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)

	// Split the listener: gRPC traffic by content-type, everything else to HTTP.
	m := cmux.New(l)
	grpcL := m.Match(cmux.HTTP2HeaderField("content-type", "application/grpc"))
	httpL := m.Match(cmux.Any())

	zgl := zapgrpc.NewLogger(s.log)
	grpclog.SetLoggerV2(zgl)

	// gRPC server.
	gs := grpc.NewServer(grpc.UnaryInterceptor(s.unaryInterceptor))
	pb.RegisterAppServer(gs, s)
	longrunning.RegisterOperationsServer(gs, s)

	// REST transcoding handler (larking), registering the same services.
	hd := &larking.Handler{
		UnaryInterceptor: s.unaryInterceptor,
	}
	if err := hd.RegisterServiceByName("app.api.Services", s); err != nil {
		return err
	}
	if err := hd.RegisterServiceByName("google.longrunning.Operations", s); err != nil {
		return err
	}

	mux := http.NewServeMux()
	if s.debug {
		s.log.Info("debug enabled", zap.String("path", "/debug/pprof/"))
		mux.HandleFunc("/debug/pprof/", pprof.Index)
		mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
		mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
		mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
		mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
	}
	mux.Handle("/api/", http.StripPrefix("/api/", hd)) // Firebase hosting restrictions
	mux.Handle("/", hd)

	hs := &http.Server{
		Handler: mux,
	}

	// Run the gRPC server, the HTTP server and the mux until one fails.
	errs := make(chan error, 3)
	go func() { errs <- gs.Serve(grpcL) }()
	defer gs.Stop()
	go func() { errs <- hs.Serve(httpL) }()
	defer hs.Close()
	go func() { errs <- m.Serve() }()

	s.log.Info("listening", zap.String("address", l.Addr().String()))
	select {
	case err := <-errs:
		return err
	case <-sigChan:
		return nil
	}
}

Protoc

Protoc build command:

protoc --go_out=paths=source_relative:. --go-grpc_out=paths=source_relative:. --ts_proto_out=. --ts_proto_opt=esModuleInterop=true,useDate=string apipb/*.proto
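
This assumes the three plugins are installed and on your PATH (ts-proto's binary is protoc-gen-ts_proto, usually under node_modules/.bin); a typical setup:

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
npm install --save-dev ts-proto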

To keep the frontend in sync with the backend I use the TypeScript integration from ts-proto. Docs here. One unsolved problem is the generation of calling code: I don't have autogenerated handlers for calling the API yet.
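
Until then, a hand-rolled helper over fetch does the job. A sketch (Resource and the route are hypothetical; it assumes the default ts-proto output in apipb/api.ts and the /api proxy configured above):

import { Resource } from "../apipb/api";

// Hypothetical call-site helper; `name` is a resource path like "resources/123".
export async function getResource(name: string): Promise<Resource> {
  const res = await fetch(`/api/v1/${name}`);
  if (!res.ok) {
    throw new Error(`GET ${name} failed: ${res.status}`);
  }
  // ts-proto generates fromJSON on each message by default.
  return Resource.fromJSON(await res.json());
}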

Layout

I like flat layouts, especially for small projects; they greatly simplify things. Everything is visible, nothing is hidden away in the depths. How things are laid out is really not a big deal, but it helps to have a structure. Web app projects are tightly integrated between the frontend and backend. The API is used to "decouple" the two, but if you want to change the API you need to ensure both still work. I've found it nice to have the backend and frontend code living close together.

.firebase
.firebaserc
.git
.gitignore
.next
README.md
apipb
├── api.pb.go
├── api.proto
├── api.ts
└── api_grpc.pb.go
$APP
├── server.go (go application files)
├── resources.go
├── resources_test.go
├── resources.tsx (javascript code resources API)
└── ...
cmd
└── appname
└── main.go
components
├── Button.tsx
└── ...
firebase.json
firestore-debug.log
firestore.rules
go.mod
go.sum
next.config.js
node_modules
package.json
pages
├── _document.tsx (html document settings)
├── _app.tsx (app settings)
├── ...
└── index.tsx (home page)
postcss.config.js
public
└── ... (images, etc)
styles
└── global.css
tailwind.config.js
tsconfig.json

Deployment

Google's ko is great! One-command deployment to Cloud Run:

gcloud --project $PROJECT run deploy --image=$(KO_DOCKER_REPO=gcr.io/$PROJECT/$APP ko publish ./cmd/$APP) --platform managed $APP
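
If ko isn't installed yet, go install github.com/google/ko@latest will fetch a recent build.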

I've used Bazel before and wanted to reuse it in projects. However, most of the time I find it gets in the way more than anything. It doesn't play nicely with local tools, so editors and Bazel constantly rebuild your code. Protoc support works, but I've had it break on multiple upgrades. The Bazel server isn't light either, and I'm working with an older Mac that needs all the RAM it can get. So for now I've been happy with ko.

For projects that require cgo I use zig. Docs here. I've then been using Bazel only for building the container images and deploying them, which is suboptimal. So I have a new project, laze! laze is a local-tool-friendly build graph executor that tries to be a simple version of Bazel. Docs for it here. I'll do a writeup once I get some more core features added.

For the frontend:

npm run export && firebase deploy
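
This assumes an export script in package.json along these lines (a sketch; adjust to taste):

"scripts": {
  "dev": "next dev",
  "build": "next build",
  "export": "next build && next export",
  "start": "next start"
}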

Firebase Hosting needs to know about the proxy.

firebase.json

{
  "hosting": {
    "public": "out/",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      {
        "source": "/api/**",
        "run": {
          "serviceId": "$APP",
          "region": "europe-west1"
        }
      },
      {
        "source": "/login",
        "destination": "/login.html"
      }
    ]
  },
  "firestore": {
    "rules": "firestore.rules"
  },
  "emulators": {
    "hosting": {
      "port": 5000
    },
    "firestore": {
      "port": 8080
    },
    "ui": {
      "enabled": true,
      "host": "0.0.0.0",
      "port": 4000
    }
  }
}