Running Postman Tests in CI/CD with Newman and GitHub Actions (Step-by-Step 2026)

March 6, 2026 EST. READ: 13 MIN #API Testing

If you're manually running Postman tests before deployments, you're leaving automated quality checks on the table. Newman—Postman's command-line runner—integrates seamlessly into GitHub Actions to execute your entire API test suite on every push, pull request, and deployment.

In this guide, I'll show you exactly how to set up a production-grade CI/CD pipeline that runs Postman tests, generates reports, and notifies your team of failures—without writing a single line of custom code.

Table of Contents

  1. What You Need
  2. Install Newman Locally
  3. Export Your Postman Collection
  4. Create the GitHub Actions Workflow
  5. Run Tests on Every Push
  6. Advanced: Parallel Execution
  7. Advanced: Slack Notifications
  8. Troubleshooting
  9. FAQ

What You Need

  • A GitHub repository (public or private)
  • Postman collection + environment files (.json)
  • GitHub Actions enabled (default on all repos)
  • Optional: Newman installed locally for testing

Time estimate: 15 minutes setup, then automatic forever.


Install Newman Locally

Before setting up GitHub Actions, test Newman on your machine:

# Install Newman globally
npm install -g newman

# Verify installation
newman --version
# Output: 6.0.1 (or newer)

Newman is just a CLI runner for Postman collections. Think of it as "Postman in the terminal."
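To smoke-test the install, point Newman at a collection and environment file. The filenames below match the ones exported in the next section; Newman exits with a non-zero code if any test assertion fails, which is exactly what CI needs:

```shell
# Run a collection with its environment; exit code is non-zero on any failed test
newman run postman/my-api-collection.json \
  -e postman/my-environment.json

# Inspect the exit code (0 = all tests passed)
echo $?
```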


Export Your Postman Collection

In Postman, export your collection and environment as JSON files:

  1. Export Collection:

    • Right-click collection → Export → Choose Collection v2.1 format
    • Save as: my-api-collection.json
  2. Export Environment:

    • Environments (left sidebar) → ⋯ next to your environment → Export
    • Save as: my-environment.json
    • Include: Base URL plus variable names for auth tokens and API keys (use {{variables}}); keep real secret values out of the committed file and inject them in CI
  3. Commit to repo:

    mkdir postman
    mv my-api-collection.json postman/
    mv my-environment.json postman/
    git add postman/
    git commit -m "feat: Add Postman collection and environment"
    
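Before committing, it's worth sanity-checking that the export really is Collection v2.1, since Newman rejects v1 exports. A minimal check (the inline sample stands in for your exported file):

```python
import json

# Minimal stand-in for an exported collection; with a real export, replace with:
# data = json.load(open("postman/my-api-collection.json"))
sample = {
    "info": {
        "name": "my-api",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    },
    "item": [],
}
data = json.loads(json.dumps(sample))

# Newman expects the v2.x schema; v1 exports fail with a conversion error
assert "v2.1.0" in data["info"]["schema"], "Re-export as Collection v2.1"
print("Collection format OK")
```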

Create the GitHub Actions Workflow

Create a new file in your repo:

mkdir -p .github/workflows

Create .github/workflows/api-tests.yml:

name: API Tests with Newman

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  api-tests:
    runs-on: ubuntu-latest
    
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      
      - name: Install Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      
      - name: Install Newman
        run: npm install -g newman newman-reporter-html
      
      - name: Run Postman tests
        run: |
          newman run postman/my-api-collection.json \
            -e postman/my-environment.json \
            --reporters cli,html,json,junit \
            --reporter-html-export reports/newman-report.html \
            --reporter-json-export reports/newman-report.json \
            --reporter-junit-export reports/newman-report.xml
      
      - name: Upload test report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: newman-report
          path: reports/
          retention-days: 30
      
      - name: Publish test results
        if: always()
        uses: dorny/test-reporter@v1
        with:
          name: Newman API Tests
          path: reports/newman-report.xml
          reporter: java-junit

What this does:

  • Triggers on: push to main/develop, pull requests
  • Installs Node.js 20 and Newman
  • Runs your Postman collection with your environment
  • Generates reports in multiple formats: CLI output in the logs, HTML for the browser, and machine-readable output for the publish step
  • Uploads reports as GitHub artifacts (viewable for 30 days)
  • Displays results inline in GitHub PR/commit page

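If you want to post-process results yourself, the JSON report is straightforward to parse. The structure below (`run.stats.assertions`, `run.failures`) matches what Newman 6 emits, but verify the exact shape against a report from your own run:

```python
import json

# Minimal stand-in for reports/newman-report.json; in CI you would do:
# report = json.load(open("reports/newman-report.json"))
report = {
    "run": {
        "stats": {"assertions": {"total": 45, "failed": 2}},
        "failures": [
            {"error": {"test": "Status code is 200",
                       "message": "expected 500 to equal 200"}},
        ],
    }
}

# Summarize assertion results and list each failing test
stats = report["run"]["stats"]["assertions"]
print(f"{stats['failed']}/{stats['total']} assertions failed")
for failure in report["run"]["failures"]:
    print(f"- {failure['error']['test']}: {failure['error']['message']}")
```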
Run Tests on Every Push

Now commit and push the workflow, then open a pull request:

git add .github/workflows/api-tests.yml
git commit -m "ci: Add Newman API test workflow"
git push origin feature/add-api-tests

Note: with the triggers above, a push to a feature branch alone won't start the workflow; it runs on pushes to main/develop and on pull requests targeting them, so open a PR to see the first run.

GitHub Actions will automatically:

  1. Clone your repo
  2. Install dependencies
  3. Run Newman with your Postman collection
  4. Show results (pass/fail) on the PR or commit page
  5. Store HTML report as artifact

View results:

  • Go to GitHub → Actions tab
  • Click the workflow run
  • Scroll to "Artifacts" → download newman-report.html
  • Open in browser to see detailed test results

Advanced: Parallel Execution

Run multiple collections simultaneously to speed up CI:

name: API Tests – Parallel

on: [push, pull_request]

jobs:
  auth-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g newman newman-reporter-html
      - run: |
          newman run postman/auth-collection.json \
            -e postman/my-environment.json \
            --reporters html \
            --reporter-html-export reports/auth-report.html

  product-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g newman newman-reporter-html
      - run: |
          newman run postman/products-collection.json \
            -e postman/my-environment.json \
            --reporters html \
            --reporter-html-export reports/products-report.html

  payment-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g newman newman-reporter-html
      - run: |
          newman run postman/payments-collection.json \
            -e postman/my-environment.json \
            --reporters html \
            --reporter-html-export reports/payments-report.html

All three collections run at the same time, so wall-clock time drops to roughly that of the slowest collection, about 3x faster here.
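The three jobs above are identical apart from the collection name, so a matrix strategy is a tidier way to express the same fan-out (a sketch; adjust the collection filenames to yours):

```yaml
jobs:
  api-tests:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false  # let all collections finish even if one fails
      matrix:
        collection: [auth, products, payments]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm install -g newman newman-reporter-html
      - run: |
          newman run postman/${{ matrix.collection }}-collection.json \
            -e postman/my-environment.json \
            --reporters html \
            --reporter-html-export reports/${{ matrix.collection }}-report.html
```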


Advanced: Slack Notifications

Add Slack alerts when tests fail:

- name: Notify Slack on failure
  if: failure()
  uses: slackapi/slack-github-action@v1
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
  with:
    payload: |
      {
        "text": "❌ API Tests Failed",
        "blocks": [
          {
            "type": "section",
            "text": {
              "type": "mrkdwn",
              "text": "*API Tests Failed* on ${{ github.ref }}\n<${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View Details>"
            }
          }
        ]
      }

- name: Notify Slack on success
  if: success()
  uses: slackapi/slack-github-action@v1
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
  with:
    payload: |
      {
        "text": "✅ All API Tests Passed",
        "blocks": [
          {
            "type": "section",
            "text": {
              "type": "mrkdwn",
              "text": "*API Tests Passed* on ${{ github.ref }}"
            }
          }
        ]
      }

Note: the v1 action reads the webhook from the SLACK_WEBHOOK_URL environment variable rather than a with: input.

Setup:

  1. Create Slack webhook: Slack API Console → Create App → Incoming Webhooks
  2. Copy webhook URL
  3. Add to repo: Settings → Secrets → New repository secret → SLACK_WEBHOOK
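Before wiring it into CI, you can verify the webhook itself from your terminal (substitute your own webhook URL):

```shell
# Sends a test message to the channel the webhook is bound to
# First: export SLACK_WEBHOOK=https://hooks.slack.com/services/...
curl -X POST \
  -H 'Content-type: application/json' \
  --data '{"text": "Webhook test from my terminal"}' \
  "$SLACK_WEBHOOK"
```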

Troubleshooting

Issue: "newman: command not found"

  • Fix: Add npm install -g newman before running tests (plus newman-reporter-html if you use the HTML reporter)

Issue: "Postman collection not found"

  • Check: Path to my-api-collection.json is correct
  • Solution: Use ls postman/ to verify files exist in workflow logs

Issue: Tests pass locally but fail in CI

  • Cause: Environment variables different between machines
  • Fix: Use {{variables}} in Postman; set in GitHub environment

Issue: Report artifacts not uploading

  • Fix: Ensure the upload step has if: always(), so it runs even when tests fail
  • Check: reports/ directory exists; permissions allow artifact write
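A throwaway debug step helps with both path issues above; drop it in before the Newman step, inspect the log, then delete it:

```yaml
- name: Debug workspace layout
  run: |
    pwd
    ls -la postman/ || echo "postman/ missing"
    ls -la reports/ || echo "reports/ not created yet"
```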

Real Project Example

On my AI Sales Assistant project (3 API endpoints, 45 test cases), I use:

  1. Auth collection (10 tests): Login, refresh token, logout
  2. Sales collection (20 tests): CRUD leads, pipeline stages, deal workflows
  3. Analytics collection (15 tests): Report generation, metric calculations

All three run in parallel on every PR. If any fail, Slack notifies the team immediately. Result: 100% of API regressions caught before merge. Zero production API bugs in 6 months.

Total setup time: 30 minutes. Ongoing maintenance: 5 minutes per quarter (updating Postman collection).


FAQ

Q: Do I need a paid Postman account?
A: No. Newman is free. You only need Postman Desktop (free) to create/edit collections.

Q: Can Newman test GraphQL APIs?
A: Yes, but Playwright is better for GraphQL (see my API testing guide).

Q: How do I run tests on a schedule (cron)?
A: Add to workflow:

on:
  schedule:
    - cron: '0 2 * * *'  # Daily at 2 AM UTC

Q: Can I run Newman in Docker?
A: Yes: docker run -v $(pwd)/postman:/etc/newman -t postman/newman run my-api-collection.json

Q: What's the difference between Newman and Playwright for API testing?
A: Newman = Postman's native runner (great for REST, limited GraphQL). Playwright = Browser automation framework (better for complex API scenarios, contract testing, integration tests). I use both: Newman for simple CRUD tests, Playwright for advanced scenarios. Read the comparison.

Q: How do I authenticate with tokens in Newman?
A: Use Postman environment variables:

  1. In your Postman environment, create a variable (e.g. authorization_token) and reference it in the request header: Authorization: Bearer {{authorization_token}}
  2. Newman resolves the variable from the environment file passed with -e
  3. For real tokens, don't commit the value; override it at runtime with --env-var
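To keep the token out of the committed environment file, Newman's --env-var flag overrides an environment variable for a single run:

```shell
# Overrides authorization_token for this run only; $API_TOKEN comes from
# your shell locally, or from a GitHub Actions secret in CI
newman run postman/my-api-collection.json \
  -e postman/my-environment.json \
  --env-var "authorization_token=$API_TOKEN"
```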

Q: Can I fail the build if tests don't meet a threshold?
A: Newman already exits with a non-zero code when any assertion fails, so the build fails by default. Add --bail to stop the run at the first failure, or --timeout-request 5000 to fail any request slower than 5 seconds.

Q: Where do I store sensitive API keys?
A: Use GitHub Secrets. Expose them in the workflow as environment variables (POSTMAN_API_KEY: ${{ secrets.POSTMAN_API_KEY }}) and pass them to Newman at runtime with --env-var, so real keys never appear in the committed environment file.
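In a workflow step, that looks like the following (the api_key variable name is illustrative; match it to whatever your collection references):

```yaml
- name: Run Postman tests
  env:
    API_KEY: ${{ secrets.POSTMAN_API_KEY }}
  run: |
    newman run postman/my-api-collection.json \
      -e postman/my-environment.json \
      --env-var "api_key=$API_KEY"
```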

Tayyab Akmal

AI & QA Automation Engineer

I've caught critical bugs in fintech, e-commerce, and SaaS platforms — then built the automation that prevents them from shipping again. 6+ years scaling test automation and AI-driven QA.
