AmruthPillai/Reactive-Resume

[Bug] Download PDF fails due to ProtocolError

GuyTuval opened this issue · 4 comments

Is there an existing issue for this?

  • Yes, I have searched the existing issues and none of them match my problem.

Product Variant

Self-Hosted

Current Behavior

Whenever I try to download my resume as a PDF, a new tab opens with the URL "about:blank" instead of the generated PDF. I noticed that exporting JSON works, though.
If any other information would help investigate the problem, please let me know :)

Expected Behavior

The PDF file should be generated and downloaded.

Steps To Reproduce

  1. Rename "simple.yml" to "docker-compose.yml".
  2. Execute the command docker compose up (see the command sketch after this list).
  3. Open Chrome on the Windows host machine.
  4. Go to localhost:3000.
  5. Go to your dashboard.
  6. Create a new resume.
  7. Press the "Download PDF" button at the bottom of the page.
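
In case it helps, the reproduction boils down to roughly these commands, run from the directory containing simple.yml (inside WSL 2):

mv simple.yml docker-compose.yml   # step 1: use simple.yml as the compose file
docker compose up                  # step 2: start postgres, minio, chrome, redis and the app
# steps 3-7: open http://localhost:3000 in Chrome, create a resume, press "Download PDF"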

What browsers are you seeing the problem on?

Firefox, Chrome

What template are you using?

Gengar

Anything else?

I'm running docker compose via WSL 2.
A detailed trace:

chrome-1    |   browserless.io:limiter Calling timeout handler +0ms
chrome-1    |   browserless.io:router Websocket job has timedout, sending 429 response +8s
chrome-1    |   browserless.io:browser-manager 0 Client(s) are currently connected +8s
chrome-1    |   browserless.io:browser-manager Closing browser session +0ms
chrome-1    |   browserless.io:browser-manager Deleting "/tmp/browserless-data-dirs/browserless-data-dir-8d6ba3c0-cb9e-4c92-b9a8-157a391feff1" user-data-dir and session from memory +0ms
chrome-1    |   browserless.io:browsers:chromium:cdp Closing browser process and all listeners +10s
app-1       | Trace: TargetCloseError: Protocol error (Page.navigate): Target closed
app-1       |     at CallbackRegistry.clear (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/common/CallbackRegistry.js:75:36)
chrome-1    |   browserless.io:browser-manager Deleting data directory "/tmp/browserless-data-dirs/browserless-data-dir-8d6ba3c0-cb9e-4c92-b9a8-157a391feff1" +3ms
app-1       |     at CdpCDPSession._onClosed (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/CDPSession.js:101:25)
app-1       |     at #onClose (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Connection.js:157:21)
app-1       |     at WebSocket.<anonymous> (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/node/NodeWebSocketTransport.js:47:30)
app-1       |     at callListener (/app/node_modules/.pnpm/ws@8.16.0/node_modules/ws/lib/event-target.js:290:14)
app-1       |     at WebSocket.onClose (/app/node_modules/.pnpm/ws@8.16.0/node_modules/ws/lib/event-target.js:220:9)
app-1       |     at WebSocket.emit (node:events:518:28)
app-1       |     at WebSocket.emit (node:domain:488:12)
app-1       |     at WebSocket.emitClose (/app/node_modules/.pnpm/ws@8.16.0/node_modules/ws/lib/websocket.js:265:10)
app-1       |     at Socket.socketOnClose (/app/node_modules/.pnpm/ws@8.16.0/node_modules/ws/lib/websocket.js:1289:15) {
app-1       |   cause: ProtocolError
app-1       |       at <instance_members_initializer> (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/common/CallbackRegistry.js:96:14)
app-1       |       at new Callback (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/common/CallbackRegistry.js:100:16)
app-1       |       at CallbackRegistry.create (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/common/CallbackRegistry.js:22:26)
app-1       |       at Connection._rawSend (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Connection.js:80:26)
app-1       |       at CdpCDPSession.send (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/CDPSession.js:66:33)
app-1       |       at navigate (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Frame.js:160:51)
app-1       |       at CdpFrame.goto (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/cdp/Frame.js:138:17)
app-1       |       at CdpFrame.<anonymous> (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/util/decorators.js:98:27)
app-1       |       at CdpPage.goto (/app/node_modules/.pnpm/puppeteer-core@22.6.0/node_modules/puppeteer-core/lib/cjs/puppeteer/api/Page.js:590:43)
app-1       |       at PrinterService.generateResume (/app/dist/apps/server/main.js:13211:24)
app-1       | }
app-1       |     at PrinterService.generateResume (/app/dist/apps/server/main.js:13261:21)
app-1       |     at async PrinterService.printResume (/app/dist/apps/server/main.js:13158:21)
app-1       |     at async ResumeService.printResume (/app/dist/apps/server/main.js:13993:21)
app-1       |     at async ResumeController.printResume (/app/dist/apps/server/main.js:13615:25)
chrome-1    |   browserless.io:limiter Job has hit timeout after 10,002ms of activity. +2s
chrome-1    |   browserless.io:limiter Calling timeout handler +0ms
chrome-1    |   browserless.io:router Websocket job has timedout, sending 429 response +2s
chrome-1    |   browserless.io:limiter (Running: 0, Pending: 0) All jobs complete.  +1ms

Hi GuyTuval - I'm going to copy in the solution that worked for me, in the hope that it also helps you. I'm not a maintainer of the repo, just someone who ran into the same issue starting from simple.yml.

  • Under app, changing the URLs to:

    PUBLIC_URL: http://localhost:3000/
    STORAGE_URL: http://localhost:9000/default

  • In the chrome container, adding the following under "restart: unless-stopped":

    extra_hosts:
      - host.docker.internal:host-gateway

Doing so changed the link from about:blank to http://localhost/pdfcode.pdf. For me, localhost still wasn't sufficient, but replacing the localhost in the provided URL with the address I'd been using to access Reactive-Resume then allowed me to access the PDF.

For reference, the full modified version of simple.yml that worked for me is as follows:

version: "3.8"

# This Docker Compose example assumes that you maintain a reverse proxy externally (or chose not to use one).
# The only two exposed ports here are from minio (:9000) and the app itself (:3000).
# If these ports are changed, ensure that the env vars passed to the app are also changed accordingly.

services:
  # Database (Postgres)
  postgres:
    image: postgres:15-alpine
    restart: unless-stopped
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Storage (for image uploads)
  minio:
    image: minio/minio
    restart: unless-stopped
    command: server /data
    ports:
      - 9000:9000
    volumes:
      - minio_data:/data
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin

  # Chrome Browser (for printing and previews)
  chrome:
    image: ghcr.io/browserless/chromium:latest
    restart: unless-stopped
    extra_hosts:
      - host.docker.internal:host-gateway    
    environment:
      TIMEOUT: 10000
      CONCURRENT: 10
      TOKEN: chrome_token
      EXIT_ON_HEALTH_FAILURE: true
      PRE_REQUEST_HEALTH_CHECK: true

  # Redis (for cache & server session management)
  redis:
    image: redis:alpine
    restart: unless-stopped
    command: redis-server --requirepass password

  app:
    image: amruthpillai/reactive-resume:latest
    restart: unless-stopped
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - minio
      - redis
      - chrome
    environment:
      # -- Environment Variables --
      PORT: 3000
      NODE_ENV: production

      # -- URLs --
      PUBLIC_URL: http://localhost:3000
      STORAGE_URL: http://localhost:9000/default      

      # -- Printer (Chrome) --
      CHROME_TOKEN: chrome_token
      CHROME_URL: ws://chrome:3000

      # -- Database (Postgres) --
      DATABASE_URL: postgresql://postgres:postgres@postgres:5432/postgres

      # -- Auth --
      ACCESS_TOKEN_SECRET: access_token_secret
      REFRESH_TOKEN_SECRET: refresh_token_secret

      # -- Emails --
      MAIL_FROM: noreply@localhost
      # SMTP_URL: smtp://user:pass@smtp:587 # Optional

      # -- Storage (Minio) --
      STORAGE_ENDPOINT: minio
      STORAGE_PORT: 9000
      STORAGE_REGION: us-east-1 # Optional
      STORAGE_BUCKET: default
      STORAGE_ACCESS_KEY: minioadmin
      STORAGE_SECRET_KEY: minioadmin
      STORAGE_USE_SSL: false

      # -- Cache (Redis) --
      REDIS_URL: redis://default:password@redis:6379

      # -- Sentry --
      # VITE_SENTRY_DSN: https://id.sentry.io # Optional

      # -- Crowdin (Optional) --
      # CROWDIN_PROJECT_ID:
      # CROWDIN_PERSONAL_TOKEN:

      # -- Email (Optional) --
      # DISABLE_EMAIL_AUTH: true
      # VITE_DISABLE_SIGNUPS: true

      # -- GitHub (Optional) --
      GITHUB_CLIENT_ID: github_client_id
      GITHUB_CLIENT_SECRET: github_client_secret
      GITHUB_CALLBACK_URL: http://localhost:3000/api/auth/github/callback

      # -- Google (Optional) --
      GOOGLE_CLIENT_ID: google_client_id
      GOOGLE_CLIENT_SECRET: google_client_secret
      GOOGLE_CALLBACK_URL: http://localhost:3000/api/auth/google/callback

volumes:
  minio_data:
  postgres_data:

@andrew-cullen Thanks, it worked!
That said, when I tried to change only the environment variables of the app and chrome containers, it still did not work.
However, when I replaced my docker-compose.yml content with your reference, I was able to download the PDF.
I'm uncertain what else was changed.
Would you kindly elaborate on this for future users experiencing this behavior?

Glad to hear that it worked for you!

The app and chrome services were the only two changes. At a guess, the volumes may have still been caching some of the incorrect configuration? I definitely had to clear all my volumes before it worked, and maybe replacing the .yml file wholesale did the same for you. Honestly I'm not sure - I went through the same experience you did 48 hours ago.
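
If stale volumes are indeed the culprit, one way to reset them (note: this wipes the postgres_data and minio_data volumes, so accounts, resumes and uploads are lost) is roughly:

docker compose down -v   # stop the stack and remove its named volumes
docker compose up        # recreate everything from the updated compose file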

Thank you for this! It worked for me.
In place of localhost, I used my local server's IP and it worked right away.
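
For anyone else landing here, the address below is only a placeholder for whatever IP you use to reach the Docker host; the two URLs under the app service would then look something like:

PUBLIC_URL: http://192.168.1.50:3000
STORAGE_URL: http://192.168.1.50:9000/default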