TLDR: I hacked mszep/pandoc_resume into something I’m comfortable sending to potential employers. To update it faster, I built a GitLab CI pipeline and serve the finished resume.pdf and resume.html on my blog.

I’ve always thought writing a resume is tedious and time-consuming. What content to use? What theme? Where does education go? Does it even matter? How many pages? Should it be a docx, pdf, and/or html? What keywords to use and what not to use? You get the idea.

How? #

Luckily, there are a few tools that solve some of these issues, such as theming and multiple output formats.

  • pandoc markdown resume - Uses Pandoc with ConTeXt and CSS to generate a resume from markdown as HTML and PDF. Only one theme.
  • JSONResume - Uses a JSON format to render an HTML resume with over 250 different themes.
  • markdown-cv - Uses markdown to generate HTML; to get a PDF, you print from the browser.
  • HackMyResume - Uses both the JSONResume and FRESH JSON formats to generate HTML and PDF.
  • many more

Pandoc Markdown #

I’ve been heavily hacking mszep’s pandoc markdown resume. Some of my changes were: updating the Makefile to support new styles, using a Lua filter to make links in the HTML resume open in new tabs, processing a directory of markdown files instead of a single markdown file, and updating the CSS to make the HTML version mobile friendly.
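
For instance, the new-tab behaviour comes from a small Pandoc Lua filter along these lines. This is a minimal sketch; the filename and the exact attributes are my choices here, not necessarily what my repo uses.

-- open-in-new-tab.lua: make every link in the HTML output open in a new tab
-- used as: pandoc --lua-filter=open-in-new-tab.lua ...
function Link(el)
  el.attributes.target = "_blank"
  el.attributes.rel = "noopener"  -- keep the new tab from reaching back via window.opener
  return el
end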

Even as a newbie to ConTeXt, I was still able to modify it a bit to get the HTML and PDF templates to line up.

Pipeline to build the resume #

I update my resume as I work, but it’s a pain to rebuild it by hand every time. Since my resume lives in GitLab and my background is in ops, I figured I should take advantage of GitLab CI pipelines and use a .gitlab-ci.yml file to rebuild my resume on every git commit.

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2

image:
  name: docker/compose:1.22.0
  entrypoint: ["/bin/sh", "-c"]

services:
  - docker:dind

before_script:
  - docker version
  - docker-compose version

build:
  stage: build
  script:
    - docker-compose up -d
  artifacts:
    paths:
      - output/resume.pdf
      - output/resume.html
      - output/*.log

The above worked to build my container, but I realized the build was taking forever because it was installing ConTeXt, a huge package, on the fly. The longer a job runs in GitLab, the more CI minutes it consumes, and the fewer jobs I can run on the free tier.

Prebuilt Pandoc Docker container #

The best solution seemed to be to prebuild the pandoc-resume container, push it to Docker Hub, and use the prebuilt container in the CI process.

docker login
docker build . -f .docker/resume.dockerfile -t resume-make:latest
docker tag resume-make:latest drianthoderyme/pandoc-resume:latest
docker push drianthoderyme/pandoc-resume:latest

I added my own Dockerfile.

FROM drianthoderyme/pandoc-resume:latest

ENV APP_NAME=resume
ENV HOME=/home/app
WORKDIR $HOME

# Copy over files
COPY . $HOME/$APP_NAME/
COPY ./Makefile $HOME/$APP_NAME/

# unfortunately, due to some texmf cache issues with ConTeXt, only root seemed to work
USER root
WORKDIR $HOME/$APP_NAME

RUN make

I updated the docker-compose.yml to use my Dockerfile instead of the resume.dockerfile.

version: '2'

services:

  resume-make:
    build:
      context: .
      dockerfile: ./.docker/Dockerfile
    command: make
    container_name: resume-make
    image: resume-make
    volumes:
      - .:/home/app/resume

Now my build takes only 2 minutes instead of 15 minutes and outputs my artifacts! There is just one thing missing. This blog is built using Jekyll and has its own CI process. How can I set up my static site to grab the artifacts from the other repo and rebuild itself?

Unfortunately, GitLab currently has a limitation (issue from Feb 2017) where one private repository cannot pull in another private repository’s artifacts, so I’m stuck copying the build output over manually… which defeats the purpose of this pipeline, haha. Still, it was fun to build.

Use a pre-commit hook #

Lately, I’ve been experimenting with pre-commit and its many hooks; even local hooks can be used. I added this Makefile first so that make build would build my resume with the quick prebuilt container above, then copy the files over from the output/ directory.

.PHONY: build
build:
	docker-compose up
	cp output/resume.html ../0xfeed.gitlab.io/
	cp output/resume.pdf ../0xfeed.gitlab.io/

Next, I set up the .pre-commit-config.yaml.

repos:
  - repo: local
    hooks:
      - id: build
        name: build
        entry: bash -c 'make build'
        language: system
        types: ["markdown"]
        files: markdown\/.*md
        pass_filenames: false

I installed the hooks locally with pre-commit install. Every time I commit a change to any of the markdown files, the container runs locally, builds my resume, and copies it over to the blog repository.

I then navigate to the blog repository, run git add -u, and repeat the git commit from before. Once I push the code up, the next pipeline run updates this site.
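
End to end, the local workflow looks roughly like this (the commit messages are placeholders, and the ../0xfeed.gitlab.io path is the one assumed in the Makefile above).

# in the resume repo: the pre-commit hook runs `make build` and copies
# resume.html / resume.pdf into the blog checkout as part of the commit
git add markdown/
git commit -m "update resume"

# in the blog repo
cd ../0xfeed.gitlab.io
git add -u
git commit -m "update resume artifacts"
git push   # the blog's own pipeline rebuilds the site from here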

Future Improvements #

Uploading the resume #

If I could upload the resume to an S3 bucket or Google Drive and guarantee a stable link, I would not need to copy the file from one repo to the other.
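
For the S3 route, the extra job in the resume’s .gitlab-ci.yml might look something like the sketch below. The bucket name is a placeholder, and it assumes AWS credentials are set as GitLab CI variables.

upload:
  stage: deploy
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  dependencies:
    - build
  script:
    # copy the artifacts from the build job to stable keys in the bucket
    - aws s3 cp output/resume.pdf s3://my-resume-bucket/resume.pdf
    - aws s3 cp output/resume.html s3://my-resume-bucket/resume.html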

Workaround: The pre-commit hook and manual commits work fine.

Multiple resumes for different career paths #

A way to create multiple resumes by breaking skills and accomplishments down into tags. For instance, I’m interested in security, DevOps, and development, and a lot of the skills listed in my resume could fall under any of those categories. If I could tag each line item, then I could create a specific resume for security, for devops, and so on; a rough sketch of how this could work follows the list below.

  • generalist - show all line items regardless of tags
  • security - only show line items with tag
  • ops - only show line items with tag
  • dev - only show line items with tag
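
One way this could bolt onto the existing pandoc setup: tag line items with classes in the markdown, e.g. - [Hardened the CI runners]{.security .ops}, and strip unmatched items with another Lua filter. The filter name, the RESUME_TAG variable, and the example classes are all hypothetical.

-- filter-by-tag.lua (hypothetical): drop tagged items that don't match the
-- requested career path, e.g. RESUME_TAG=security pandoc --lua-filter=filter-by-tag.lua ...
local wanted = os.getenv("RESUME_TAG")

function Span(el)
  -- untagged spans always stay; with no RESUME_TAG set, everything stays (the generalist resume)
  if not wanted or #el.classes == 0 then
    return el
  end
  for _, class in ipairs(el.classes) do
    if class == wanted then
      return el
    end
  end
  return {}  -- an empty list removes the span (a fuller version would also prune the now-empty bullet)
end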

Workaround: My niche is ops even though I can cover the other areas, so I only need one resume path. :)

Anonymize Resume #

Over time I’ve realized that all my data is being scraped by robots and put into databases. That data is then resold to data brokers, who sell it on to contact-list companies used by salespeople. After graduating, filling out my LinkedIn, and putting my exact address and other personally identifiable information on my resume, I found my information scattered across the internet. I had to send so many “Remove Me” type emails that it was painful. If you’d like to see your own data, try googling with the following format.

“firstname lastname” + “current company”

Personal information that can be omitted from a resume and provided if/when asked:

  • last name
  • company names
  • university name
  • graduation year - prevents age discrimination
  • exact address - location-based compensation reduces your bargaining power
  • email - can be replaced with a Google Form
  • phone number - can be omitted since they can reach you through the Google Form

Yes, I have done this and received job offers with this type of resume.

Workaround: Continue using the manually anonymized resume and deliver the full information over a non-recorded phone conversation… Even if they have to enter the information by hand, it’s less risk to me.

Conclusion #

Here are my completed PDF and HTML resumes!

So far I’ve been a big fan of the pandoc markdown resume with ConTeXt, but I think I’m finished hacking it. For the next iteration I plan to use HackMyResume for the following reasons.

  • It’s written in an actual programming language, so it’s hackable, whereas the pandoc resume is just a Makefile of commands.
  • It counts keywords, which matters because robots read resumes more often than people do.
  • It supports many different themes.
  • I’ve realized we don’t need LaTeX to generate a PDF if we can render the HTML with Puppeteer and save it as a PDF, which saves a lot of headache trying to keep themes consistent between formats.
  • It has lots of other features, which explains why it has thousands of GitHub stars.