
Third Party Integration - HTTP API, OAuth2 Provider API, Plugin API, Library vs Framework, Smart Tokens #166

joshuakarp opened this issue May 28, 2021 · 23 comments
Labels: design, r&d:polykey:core activity 1 (Secret Vault Sharing and Secret History Management), research


joshuakarp commented May 28, 2021

Created by @CMCDragonkai

Once we reintroduce OAuth2 into Polykey, it would be beneficial to implement it from scratch, since our OAuth2 requirements are pretty simple and this reduces our dependency footprint.

We have already implemented an OAuth2 client side from scratch in MR 141.

This would mean we can get rid of these dependencies:

    "oauth2orize": "^1.11.0",
    "passport-oauth2-client-password": "^0.1.2",
    "@types/oauth2orize": "^1.8.8",
    "@types/passport-oauth2-client-password": "^0.1.2",

In fact, @CMCDragonkai is in favour of getting rid of password-related libraries entirely so that we can be much more lightweight.

Like all of these:

    "passport": "^0.4.1",
    "passport-http": "^0.3.0",
    "passport-http-bearer": "^1.0.1",
    "@types/passport-http": "^0.3.8",
    "@types/passport-http-bearer": "^1.0.36",

In our server-side usage of OAuth2, we would expect only a 2-legged flow, for example the "client credentials flow": https://docs.microsoft.com/en-us/linkedin/shared/authentication/client-credentials-flow

@CMCDragonkai: A client credentials flow is a lot easier. We should expect to just keep track of tokens, which we can store using our vault system.
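
As a rough sketch of what the 2-legged exchange looks like from a third party's side (the /oauth/token path mirrors the old code below; the helper and field names are hypothetical):

// Hypothetical client-credentials exchange; assumes Node 18+ for fetch.
async function fetchAccessToken(
  baseUrl: string,
  clientId: string,
  clientSecret: string,
): Promise<string> {
  const response = await fetch(`${baseUrl}/oauth/token`, {
    method: 'POST',
    headers: {
      // The client authenticates itself directly; no resource-owner redirect
      Authorization:
        'Basic ' + Buffer.from(`${clientId}:${clientSecret}`).toString('base64'),
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: new URLSearchParams({ grant_type: 'client_credentials' }).toString(),
  });
  if (!response.ok) {
    throw new Error(`token request failed: ${response.status}`);
  }
  const { access_token: accessToken } = await response.json();
  return accessToken;
}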

@joshuakarp joshuakarp changed the title Reimplement Oauth2 server from scratch Implement Oauth2 server from scratch May 28, 2021
@joshuakarp joshuakarp changed the title Implement Oauth2 server from scratch Implement OAuth2 server from scratch May 28, 2021
@CMCDragonkai

In addition to rebuilding the OAuth2 server from scratch, there's a related problem: having an HTTP API at all.

GRPC is compatible with an HTTP API if there's a gateway that translates our GRPC calls to HTTP.

Having an HTTP API will make it easier for the Polykey agent to be integrated with other kinds of clients, like web browsers, which don't support GRPC.

Right now the GRPC ecosystem has a thing called "grpc-gateway", a Go-based server that generates an HTTP API from a common protobuf spec shared with GRPC. However, since it is written in Go, it cannot be integrated easily into our Node ecosystem.

We can manually create an HTTP API that calls the same internal domain methods, but this adds maintenance cost: we now have to maintain conformance between the GRPC API and the HTTP API.
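
For example (names hypothetical), conformance is easier to keep if both handler layers are thin wrappers over one domain method:

// Hypothetical sketch: one domain method, two thin transport wrappers.
// Only the marshalling differs, so the two APIs cannot drift in behaviour.
type VaultManager = { listVaults(): Promise<string[]> };

// Shared domain call
const vaultsList = (vaultManager: VaultManager) => vaultManager.listVaults();

// GRPC handler (signature simplified)
const vaultsListGrpc =
  (vaultManager: VaultManager) =>
  async (call: unknown, callback: (err: unknown, value: unknown) => void) => {
    callback(null, { names: await vaultsList(vaultManager) });
  };

// HTTP handler (express-style signature, simplified)
const vaultsListHttp =
  (vaultManager: VaultManager) => async (req: unknown, res: any) => {
    res.json(await vaultsList(vaultManager));
  };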

Note we have a past issue about using fastify instead of express for implementing an HTTP API: https://gitlab.com/MatrixAI/Engineering/Polykey/js-polykey/-/issues/204

Therefore:

@CMCDragonkai

I'm going to elevate this issue into an Epic, as it is quite relevant to multiple potential subissues.

  • Native HTTP API for polykey-agent
  • How to maintain conformance between GRPC and HTTP API
  • Ensuring that this works for browsers to allow browser extensions in the future
  • OAuth2 as an authorisation and authentication system to Polykey
  • How this integrates with the existing SessionManagement we have

@CMCDragonkai CMCDragonkai added epic Big issue with multiple subissues design Requires design labels Jul 5, 2021
@CMCDragonkai CMCDragonkai changed the title Implement OAuth2 server from scratch HTTP API and OAuth2 Provider API for PK Agents for Third Party Integration Jul 5, 2021

CMCDragonkai commented Aug 16, 2021

I'm removing these dependencies; they deserve a review on whether they are actually needed to implement this:

    "express": "^4.17.1",
    "express-openapi-validator": "^4.0.4",
    "express-session": "^1.17.1",
    "js-yaml": "^3.3.0",
    "jsonwebtoken": "^8.5.1",
    "oauth2orize": "^1.11.0",
    "passport": "^0.4.1",
    "passport-http": "^0.3.0",
    "passport-http-bearer": "^1.0.1",
    "passport-oauth2-client-password": "^0.1.2",
    "swagger-ui-express": "^4.1.4",
    "@types/express-session": "^1.17.0",
    "@types/js-yaml": "^3.12.5",
    "@types/jsonwebtoken": "^8.5.0",
    "@types/oauth2orize": "^1.8.8",
    "@types/passport-http": "^0.3.8",
    "@types/passport-http-bearer": "^1.0.36",
    "@types/passport-oauth2-client-password": "^0.1.2",
    "@types/swagger-ui-express": "^4.1.2",
    "swagger-node-codegen": "^1.6.3",

There are way too many dependencies just to implement an HTTP API from the old code. Any new code should focus on something like this:

  1. An HTTP router server - something simpler than express, perhaps fastify or 0http - as minimal as possible, because our HTTP server only does very basic things: we just want to route requests to handlers, like the handlers we have for our GRPC client server
  2. The ability to implement an OAuth2 server for authentication/authorisation - because we already use JWT via jose, this is something we should be able to implement ourselves without any additional dependencies (our JWT tokens mean that our authentication is stateless, and they are expected to be put on the HTTP request headers); see the sketch after this list
  3. A swagger API generator and swagger API outputter - this again should be as minimal as possible, and perhaps fastify would be better here: https://github.com/fastify/fastify-swagger
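
A minimal sketch of points 1 and 2 together, assuming fastify and jose (not a committed design; the route and claim names are placeholders):

import fastify from 'fastify';
import { createPublicKey } from 'crypto';
import { jwtVerify } from 'jose';

async function startHttpApi(publicKeyPem: string, port: number) {
  const app = fastify();
  const publicKey = createPublicKey(publicKeyPem);

  // Stateless authentication: the bearer JWT on the request headers
  // carries everything, so there is no server-side session lookup
  app.addHook('onRequest', async (request, reply) => {
    const auth = request.headers.authorization ?? '';
    if (!auth.startsWith('Bearer ')) {
      return reply.code(401).send({ error: 'invalid_token' });
    }
    try {
      const { payload } = await jwtVerify(
        auth.slice('Bearer '.length),
        publicKey,
      );
      (request as any).scope = payload.scope ?? [];
    } catch {
      return reply.code(401).send({ error: 'invalid_token' });
    }
  });

  // Routes would delegate to the same domain methods as the GRPC handlers
  app.get('/vaults', async () => {
    return [];
  });

  await app.listen({ port });
}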


CMCDragonkai commented Aug 16, 2021

Here is the old code that can be revitalised when we investigate this:

HttpApi.ts
import fs from 'fs';
import net from 'net';
import path from 'path';
import jsyaml from 'js-yaml';
import { promisify } from 'util';
import Logger from '@matrixai/logger';

/** Internal */
import {
  PK_BOOTSTRAP_HOSTS,
  PK_NODE_PORT_HTTP,
  PK_NODE_ADDR_HTTP,
} from '../config';

import { Address } from '../nodes/Node';
import { TLSCredentials } from '../nodes/pki/PublicKeyInfrastructure';

// External deps (since removed from package.json above) and the old
// authorization-server modules that this class depends on:
import passport from 'passport';
import http from 'http';
import https from 'https';
import session from 'express-session';
import swaggerUI from 'swagger-ui-express';
import { Strategy as ClientPasswordStrategy } from 'passport-oauth2-client-password';
import { BasicStrategy } from 'passport-http';
import express, { RequestHandler } from 'express';
import * as utils from './AuthorizationServer/utils';
import * as config from './AuthorizationServer/Config';
import * as OpenApiValidator from 'express-openapi-validator';
import { User, Client } from './AuthorizationServer/OAuth2Store';
import OAuth2 from './AuthorizationServer/OAuth2';
import { Strategy as BearerStrategy } from 'passport-http-bearer';

class HttpApi {
  private openApiPath: string;
  private logger: Logger;

  private updateApiAddress: (apiAddress: Address) => void;
  private handleCSR: (csr: string) => string;
  private getRootCertificate: () => string;
  private getCertificateChain: () => string[];
  private getTlsCredentials: () => TLSCredentials;
  private getVaultNames: () => string[];
  private newVault: (vaultName: string) => Promise<void>;
  private deleteVault: (vaultName: string) => Promise<void>;
  private listSecrets: (vaultName: string) => string[];
  private getSecret: (vaultName: string, secretName: string) => Buffer;
  private newSecret: (
    vaultName: string,
    secretName: string,
    secretContent: Buffer,
  ) => Promise<void>;
  private deleteSecret: (
    vaultName: string,
    secretName: string,
  ) => Promise<void>;

  private tlsCredentials: TLSCredentials;
  private oauth: OAuth2;
  private expressServer: express.Express;
  private httpServer: http.Server;

  constructor(
    updateApiAddress: (apiAddress: Address) => void,
    handleCSR: (csr: string) => string,
    getRootCertificate: () => string,
    getCertificateChain: () => string[],
    getTlsCredentials: () => TLSCredentials,
    getVaultNames: () => string[],
    newVault: (vaultName: string) => Promise<void>,
    deleteVault: (vaultName: string) => Promise<void>,
    listSecrets: (vaultName: string) => string[],
    getSecret: (vaultName: string, secretName: string) => Buffer,
    newSecret: (
      vaultName: string,
      secretName: string,
      secretContent: string | Buffer,
    ) => Promise<void>,
    deleteSecret: (vaultName: string, secretName: string) => Promise<void>,
    logger?: Logger,
  ) {
    this.openApiPath = path.join(__dirname, '../openapi.yaml');
    this.updateApiAddress = updateApiAddress;
    this.handleCSR = handleCSR;
    this.getRootCertificate = getRootCertificate;
    this.getCertificateChain = getCertificateChain;
    this.getTlsCredentials = getTlsCredentials;
    this.getVaultNames = getVaultNames;
    this.newVault = newVault;
    this.deleteVault = deleteVault;
    this.listSecrets = listSecrets;
    this.getSecret = getSecret;
    this.newSecret = newSecret;
    this.deleteSecret = deleteSecret;
    this.logger = logger ?? new Logger();
  }

  async stop() {
    if (this.httpServer) {
      this.logger.info('Shutting down HTTP server');
      // Bind close to the server instance, otherwise promisify loses `this`
      await promisify(this.httpServer.close.bind(this.httpServer))();
    }
  }

  async start(host = PK_NODE_ADDR_HTTP, port = parseInt(PK_NODE_PORT_HTTP)) {
    return new Promise<number>((resolve, reject) => {
      try {
        this.tlsCredentials = this.getTlsCredentials();
        this.oauth = new OAuth2(
          this.tlsCredentials.keypair.public,
          this.tlsCredentials.keypair.private,
          // this.logger.getLogger('OAuth2'),
          this.logger,
        );
        this.expressServer = express();

        this.expressServer.set('view engine', 'ejs');
        // Session Configuration
        const MemoryStore = session.MemoryStore;
        this.expressServer.use(
          session({
            saveUninitialized: true,
            resave: true,
            secret: 'secret',
            store: new MemoryStore(),
            cookie: { maxAge: 3600000 * 24 * 7 * 52 },
          }),
        );

        this.expressServer.use(express.json());
        this.expressServer.use(express.text());
        this.expressServer.use(express.urlencoded({ extended: false }));

        // create default client and user for the polykey node (highest privilege)
        this.oauth.store.saveClient(
          'polykey',
          utils.createUuid(),
          ['admin'],
          true,
        );
        this.oauth.store.saveUser(
          'polykey',
          'polykey',
          utils.createUuid(),
          ['admin'],
          true,
        );

        this.expressServer.use(passport.initialize());
        this.expressServer.use(passport.session());

        // redirect from base url to docs
        this.expressServer.get('/', (req, res) => {
          res.redirect('/docs');
        });

        passport.use(
          'clientBasic',
          new BasicStrategy((clientId, clientSecret, done) => {
            try {
              const client = this.oauth.store.getClient(clientId);
              client.validate(clientSecret);
              done(null, client);
            } catch (error) {
              done(null, false);
            }
          }),
        );
        /**
         * BearerStrategy
         *
         * This strategy is used to authenticate either users or clients based on an access token
         * (aka a bearer token).  If a user, they must have previously authorized a client
         * application, which is issued an access token to make requests on behalf of
         * the authorizing user.
         *
         * To keep this example simple, restricted scopes are not implemented, and this is just for
         * illustrative purposes
         */
        passport.use(
          'accessToken',
          new BearerStrategy((token, done) => {
            try {
              const accessToken = this.oauth.store.getAccessToken(token);
              const user = this.oauth.store.getUser(accessToken.userId!);
              done(null, user, { scope: accessToken.scope ?? [] });
            } catch (error) {
              done(null, false);
            }
          }),
        );

        /**
         * Client Password strategy
         *
         * The OAuth 2.0 client password authentication strategy authenticates clients
         * using a client ID and client secret. The strategy requires a verify callback,
         * which accepts those credentials and calls done providing a client.
         */
        passport.use(
          'clientPassword',
          new ClientPasswordStrategy((clientId, clientSecret, done) => {
            try {
              const client = this.oauth.store.getClient(clientId);
              client.validate(clientSecret);
              done(null, client);
            } catch (error) {
              done(null, false);
            }
          }),
        );

        // Register serialization and deserialization functions.
        //
        // When a client redirects a user to user authorization endpoint, an
        // authorization transaction is initiated.  To complete the transaction, the
        // user must authenticate and approve the authorization request.  Because this
        // may involve multiple HTTPS request/response exchanges, the transaction is
        // stored in the session.
        //
        // An application must supply serialization functions, which determine how the
        // client object is serialized into the session.  Typically this will be a
        // simple matter of serializing the client's ID, and deserializing by finding
        // the client by ID from the database.
        passport.serializeUser((user: User, done) => {
          done(null, user.id);
        });

        passport.deserializeUser((id: string, done) => {
          try {
            const user = this.oauth.store.getUser(id);
            done(null, user);
          } catch (error) {
            done(error);
          }
        });

        // token endpoints
        this.expressServer.post('/oauth/token', this.oauth.token);
        this.expressServer.post('/oauth/refresh', this.oauth.token);
        this.expressServer.get('/oauth/tokeninfo', [
          passport.authenticate(['accessToken'], { session: true }),
          this.oauth.tokenInfo.bind(this.oauth),
        ]);
        this.expressServer.get(
          '/oauth/revoke',
          this.oauth.revokeToken.bind(this.oauth),
        );

        // OpenAPI endpoints
        const schema = jsyaml.load(
          fs.readFileSync(this.openApiPath).toString(),
        );
        this.expressServer.get('/spec', (req, res) => {
          res.type('json').send(JSON.stringify(schema, null, 2));
        });
        this.expressServer.use(
          '/docs',
          swaggerUI.serve,
          swaggerUI.setup(schema, undefined, {
            oauth: {
              clientId: 'polykey',
            },
          }),
        );

        this.expressServer.use(
          OpenApiValidator.middleware({
            apiSpec: schema,
            validateResponses: true,
          }),
        );
        this.setupOpenApiRouter();

        // Start the server
        const pkHost = PK_BOOTSTRAP_HOSTS ?? 'localhost';
        const httpsOptions: https.ServerOptions = {
          cert: this.tlsCredentials.certificate,
          key: this.tlsCredentials.keypair.private,
          ca: this.tlsCredentials.rootCertificate,
        };

        this.httpServer = https
          .createServer(httpsOptions, this.expressServer)
          .listen({ port, host }, () => {
            const addressInfo = this.httpServer.address() as net.AddressInfo;
            const address = Address.fromAddressInfo(addressInfo);
            address.updateHost(pkHost);
            this.updateApiAddress(address);

            this.logger.info(
              `HTTP API endpoint: https://${address.toString()}`,
            );
            this.logger.info(
              `HTTP API docs: https://${address.toString()}/docs/`,
            );

            resolve(port);
          });
      } catch (error) {
        reject(error);
      }
    });
  }

  getOAuthClient(): Client {
    return this.oauth.store.getClient('polykey');
  }

  listOAuthTokens(): string[] {
    return Array.from(this.oauth.store.accessTokenStore.keys());
  }

  newOAuthToken(scopes: string[] = [], expiry = 3600): string {
    const expiryDate = new Date(Date.now() + expiry * 1000);
    const token = utils.createToken(
      this.oauth.store.privateKey,
      expiry, // use the requested expiry rather than the global default
      'polykey',
    );
    this.oauth.store.saveAccessToken(
      token,
      expiryDate,
      'polykey',
      'polykey',
      scopes,
    );
    return token;
  }

  revokeOAuthToken(token: string): boolean {
    this.oauth.store.deleteAccessToken(token);
    return !this.oauth.store.hasAccessToken(token);
  }

  // === openapi endpoints === //
  private handleRootCertificateRequest: RequestHandler = async (req, res) => {
    try {
      const response = this.getRootCertificate();
      this.writeString(res, response);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleCertificateChainRequest: RequestHandler = async (req, res) => {
    try {
      const response = this.getCertificateChain();
      this.writeStringList(res, response);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleCertificateSigningRequest: RequestHandler = async (
    req,
    res,
  ) => {
    try {
      const body = req.body;
      const response = this.handleCSR(body);
      this.writeString(res, response);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleVaultsListRequest: RequestHandler = async (req, res) => {
    try {
      const response = this.getVaultNames();
      this.writeStringList(res, response);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleNewVaultRequest: RequestHandler = async (req, res) => {
    try {
      const vaultName = (req as any).openapi.pathParams.vaultName;
      await this.newVault(vaultName);
      this.writeSuccess(res);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleDeleteVaultRequest: RequestHandler = async (req, res) => {
    try {
      const vaultName = (req as any).openapi.pathParams.vaultName;
      await this.deleteVault(vaultName);
      this.writeSuccess(res);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleSecretsListRequest: RequestHandler = async (req, res) => {
    try {
      const vaultName = (req as any).openapi.pathParams.vaultName;
      const response = this.listSecrets(vaultName);
      this.writeStringList(res, response);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleGetSecretRequest: RequestHandler = async (req, res) => {
    try {
      const vaultName = (req as any).openapi.pathParams.vaultName;
      const secretName = (req as any).openapi.pathParams.secretName;
      const response = this.getSecret(vaultName, secretName);

      const accepts = req.accepts()[0];
      if (!accepts || accepts == 'text/plain' || accepts == '*/*') {
        this.writeString(res, response.toString());
      } else if (accepts == 'application/octet-stream') {
        this.writeBinary(res, secretName, response);
      } else {
        throw Error(`MIME type not supported: ${accepts}`);
      }
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleNewSecretRequest: RequestHandler = async (req, res) => {
    try {
      const vaultName = (req as any).openapi.pathParams.vaultName;
      const secretName = (req as any).openapi.pathParams.secretName;

      let secretContent: Buffer;
      const contentType = req.headers['content-type'];
      if (contentType == 'text/plain') {
        secretContent = Buffer.from(req.body);
      } else if (contentType == 'application/octet-stream') {
        secretContent = await new Promise<Buffer>((resolve, reject) => {
          const bufferList: Buffer[] = [];
          req.on('data', (data) => bufferList.push(data));
          req.on('error', (err) => reject(err));
          req.on('end', () => resolve(Buffer.concat(bufferList)));
        });
      } else {
        throw Error(`MIME type not supported: ${contentType}`);
      }

      await this.newSecret(vaultName, secretName, secretContent);
      this.writeSuccess(res);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  private handleDeleteSecretRequest: RequestHandler = async (req, res) => {
    try {
      const vaultName = (req as any).openapi.pathParams.vaultName;
      const secretName = (req as any).openapi.pathParams.secretName;
      await this.deleteSecret(vaultName, secretName);
      this.writeSuccess(res);
    } catch (error) {
      this.writeError(res, error);
    }
  };

  // === Helper methods === //
  private writeSuccess(res: http.ServerResponse) {
    res.writeHead(200);
    res.end();
  }

  private writeError(res: http.ServerResponse, error: Error) {
    res.writeHead(500, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: error.message }, null, 2));
  }

  private writeString(res: http.ServerResponse, text: string) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end(text);
  }

  private writeStringList(res: http.ServerResponse, list: string[]) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(list, null, 2));
  }

  private writeJson(res: http.ServerResponse, payload: Record<string, any>) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(payload, null, 2));
  }

  private writeBinary(
    res: http.ServerResponse,
    filename: string,
    payload: Buffer,
  ) {
    res.writeHead(200, {
      'Content-Type': 'application/octet-stream',
      'Content-Disposition': `file; filename="${filename}"`,
    });
    res.end(payload, 'binary');
  }

  private checkScope(scope: string[]) {
    return (req, res, next) => {
      // access control middleware to check for required scope
      if (!scope.some((r) => req.authInfo.scope.includes(r))) {
        res.statusCode = 403;
        return res.end('Forbidden');
      }
      return next();
    };
  }

  /**
   * The purpose of this route is to collect the request variables as defined in the
   * OpenAPI document and pass them to the handling controller as another Express
   * middleware. All parameters are collected in the request.swagger.values key-value object
   *
   * The assumption is that security handlers have already verified and allowed access
   * to this path. If the business-logic of a particular path is dependant on authentication
   * parameters (e.g. scope checking) - it is recommended to define the authentication header
   * as one of the parameters expected in the OpenAPI/Swagger document.
   *
   * Requests made to paths that are not in the OpenAPI scope
   * are passed on to the next middleware handler.
   */
  private setupOpenApiRouter() {
    // setup all endpoints
    ///////////////////////////
    // Certificate Authority //
    ///////////////////////////
    this.expressServer.get('/ca/root_certificate', [
      passport.authenticate(['accessToken'], { session: true }),
      this.handleRootCertificateRequest.bind(this),
    ]);
    this.expressServer.get('/ca/certificate_chain', [
      passport.authenticate(['accessToken'], { session: true }),
      this.handleCertificateChainRequest.bind(this),
    ]);
    this.expressServer.post('/ca/certificate_signing_request', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'request_certificate']),
      this.handleCertificateSigningRequest.bind(this),
    ]);
    ////////////
    // Vaults //
    ////////////
    this.expressServer.get('/vaults', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'write_vaults', 'read_vaults']),
      this.handleVaultsListRequest.bind(this),
    ]);
    this.expressServer.post('/vaults/:vaultName', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'write_vaults']),
      this.handleNewVaultRequest.bind(this),
    ]);
    this.expressServer.delete('/vaults/:vaultName', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'write_vaults']),
      this.handleDeleteVaultRequest.bind(this),
    ]);
    /////////////
    // Secrets //
    /////////////
    this.expressServer.get('/vaults/:vaultName', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'write_secrets', 'read_secrets']),
      this.handleSecretsListRequest.bind(this),
    ]);
    this.expressServer.get('/secrets/:vaultName/:secretName', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'write_secrets', 'read_secrets']),
      this.handleGetSecretRequest.bind(this),
    ]);
    this.expressServer.post('/secrets/:vaultName/:secretName', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'write_secrets']),
      this.handleNewSecretRequest.bind(this),
    ]);
    this.expressServer.delete('/secrets/:vaultName/:secretName', [
      passport.authenticate(['accessToken'], { session: true }),
      this.checkScope(['admin', 'write_secrets']),
      this.handleDeleteSecretRequest.bind(this),
    ]);
  }
}

export default HttpApi;
Config.ts
const token = {
  expiresIn: 60 * 60, // access tokens last one hour (seconds)
  calculateExpirationDate: () => new Date(Date.now() + 60 * 60 * 1000),
};

const refreshToken = {
  expiresIn: 52560000, // refresh tokens last ~20 months (seconds)
};

export { token, refreshToken };
OAuth2Store.ts
import {
  ErrorAuthCodeUndefined,
  ErrorInvalidCredentials,
  ErrorInvalidSecret,
  ErrorTokenUndefined,
  ErrorUserUndefined,
  ErrorClientUndefined,
} from '../../errors';
import { createUuid } from './utils';

class AuthorizationCode {
  code: string;
  clientId: string;
  redirectURI: string;
  userId: string;
  scope: string[];

  constructor(
    code: string,
    clientId: string,
    redirectURI: string,
    userId: string,
    scope: string[],
  ) {
    this.code = code;
    this.clientId = clientId;
    this.redirectURI = redirectURI;
    this.userId = userId;
    this.scope = scope;
  }
}
class AccessToken {
  token: string;
  expiration: Date;
  userId?: string;
  clientId?: string;
  scope?: string[];
  constructor(
    token: string,
    expiration: Date,
    userId: string,
    clientId: string,
    scope: string[],
  ) {
    this.token = token;
    this.expiration = expiration;
    this.userId = userId;
    this.clientId = clientId;
    this.scope = scope;
  }
}

class Client {
  id: string;
  private secret: string;
  scope: string[];
  trusted: boolean;
  constructor(
    id: string,
    secret: string,
    scope: string[] = [],
    trusted = false,
  ) {
    this.id = id;
    this.secret = secret;
    this.scope = scope;
    this.trusted = trusted;
  }

  updateSecret(secret: string) {
    this.secret = secret;
  }

  public get Secret(): string {
    return this.secret;
  }

  validate(secret: string) {
    if (this.secret != secret) {
      throw new ErrorInvalidSecret('secret does not match');
    }
  }
}

class User {
  id: string;
  username: string;
  private password: string;
  scope: string[];
  trusted: boolean;
  constructor(
    id: string,
    username: string,
    password: string,
    scope: string[] = [],
    trusted = false,
  ) {
    this.id = id;
    this.username = username;
    this.password = password;
    this.scope = scope;
    this.trusted = trusted;
  }

  updatePassword(password: string) {
    this.password = password;
  }

  public get Password(): string {
    return this.password;
  }

  validate(password: string) {
    if (this.password != password) {
      throw new ErrorInvalidCredentials('password does not match');
    }
  }
}

class OAuth2Store {
  accessCodeStore: Map<string, AuthorizationCode>;
  accessTokenStore: Map<string, AccessToken>;
  refreshTokenStore: Map<string, AccessToken>;
  clientStore: Map<string, Client>;
  userStore: Map<string, User>;

  publicKey: string;
  privateKey: string;

  constructor(publicKey: string, privateKey: string) {
    this.accessCodeStore = new Map();
    this.accessTokenStore = new Map();
    this.refreshTokenStore = new Map();
    this.clientStore = new Map();
    this.userStore = new Map();

    this.publicKey = publicKey;
    this.privateKey = privateKey;
  }

  ////////////////////////
  // Authorization Code //
  ////////////////////////
  hasAuthorizationCode(code: string): boolean {
    return this.accessCodeStore.has(code);
  }

  getAuthorizationCode(code: string): AuthorizationCode {
    if (!this.accessCodeStore.has(code)) {
      throw new ErrorAuthCodeUndefined('authorization code does not exist');
    }
    return this.accessCodeStore.get(code)!;
  }

  saveAuthorizationCode(
    code: string,
    clientId: string,
    redirectURI: string,
    userId: string,
    scope: string[],
  ): void {
    this.accessCodeStore.set(
      code,
      new AuthorizationCode(code, clientId, redirectURI, userId, scope),
    );
  }

  deleteAuthorizationCode(code: string): AuthorizationCode {
    const ac = this.getAuthorizationCode(code);
    this.accessCodeStore.delete(code);
    return ac;
  }

  ///////////////////
  // Access Tokens //
  ///////////////////
  hasAccessToken(token: string): boolean {
    return this.accessTokenStore.has(token);
  }

  getAccessToken(token: string): AccessToken {
    if (!this.accessTokenStore.has(token)) {
      throw new ErrorTokenUndefined('access token does not exist');
    }
    return this.accessTokenStore.get(token)!;
  }

  saveAccessToken(
    token: string,
    expiration: Date,
    userId: string,
    clientId: string,
    scope: string[] = [],
  ): AccessToken {
    this.accessTokenStore.set(
      token,
      new AccessToken(token, expiration, userId, clientId, scope),
    );
    return this.accessTokenStore.get(token)!;
  }

  deleteAccessToken(token: string): AccessToken {
    const at = this.getAccessToken(token);
    this.accessTokenStore.delete(token);
    return at;
  }

  ////////////////////
  // Refresh Tokens //
  ////////////////////
  hasRefreshToken(token: string): boolean {
    return this.refreshTokenStore.has(token);
  }

  getRefreshToken(token: string): AccessToken {
    if (!this.refreshTokenStore.has(token)) {
      throw new ErrorTokenUndefined('refresh token does not exist');
    }
    return this.refreshTokenStore.get(token)!;
  }

  saveRefreshToken(
    token: string,
    expiration: Date,
    userId: string,
    clientId: string,
    scope: string[] = [],
  ): AccessToken {
    this.refreshTokenStore.set(
      token,
      new AccessToken(token, expiration, userId, clientId, scope),
    );
    return this.refreshTokenStore.get(token)!;
  }

  deleteRefreshToken(token: string): AccessToken {
    const rt = this.getRefreshToken(token);
    this.refreshTokenStore.delete(token);
    return rt;
  }

  /////////////
  // Clients //
  /////////////
  hasClient(id: string): boolean {
    return this.clientStore.has(id);
  }

  getClient(id: string): Client {
    if (!this.clientStore.has(id)) {
      throw new ErrorClientUndefined('client does not exist');
    }
    return this.clientStore.get(id)!;
  }

  saveClient(
    id: string = createUuid(),
    secret: string,
    scope?: string[],
    trusted?: boolean,
  ): void {
    this.clientStore.set(id, new Client(id, secret, scope, trusted));
  }

  updateClient(
    id: string,
    secret?: string,
    scope?: string[],
    trusted?: boolean,
  ): void {
    const client = this.getClient(id);
    if (secret) {
      client.updateSecret(secret);
    }
    if (scope) {
      client.scope = scope;
    }
    if (trusted) {
      client.trusted = trusted;
    }
    this.clientStore.set(client.id, client);
  }

  deleteClient(id: string): Client {
    const client = this.getClient(id);
    this.clientStore.delete(id);
    return client;
  }

  //////////
  // User //
  //////////
  hasUser(id: string): boolean {
    return this.userStore.has(id);
  }

  getUser(id: string): User {
    if (!this.userStore.has(id)) {
      throw new ErrorUserUndefined('user does not exist');
    }
    return this.userStore.get(id)!;
  }

  findUserByUsername(username: string): User {
    const values = Array.from(this.userStore.values());
    if (values.findIndex((v) => v.username == username) == -1) {
      throw new ErrorUserUndefined('user does not exist');
    }
    return values.find((v) => v.username == username)!;
  }

  saveUser(
    id: string = createUuid(),
    username: string,
    password: string,
    scope?: string[],
    trusted?: boolean,
  ): void {
    this.userStore.set(id, new User(id, username, password, scope, trusted));
  }

  updateUser(
    username: string,
    password?: string,
    scope?: string[],
    trusted?: boolean,
  ): void {
    const user = this.findUserByUsername(username);
    if (password) {
      user.updatePassword(password);
    }
    if (scope) {
      user.scope = scope;
    }
    if (trusted) {
      user.trusted = trusted;
    }
    this.userStore.set(user.id, user);
  }

  deleteUser(id: string): User {
    const user = this.getUser(id);
    this.userStore.delete(id);
    return user;
  }
}

export default OAuth2Store;
export { AuthorizationCode, AccessToken, Client, User };
OAuth2.ts
import passport from 'passport';
import Logger from '@matrixai/logger';
import * as utils from './utils';
import * as config from './Config';
import oauth2orize from 'oauth2orize';
import Validation from './Validation';
import OAuth2Store from './OAuth2Store';

class OAuth2 {
  store: OAuth2Store;
  private server: oauth2orize.OAuth2Server;
  private validation: Validation;
  private expiresIn = { expires_in: config.token.expiresIn };
  private logger: Logger;

  constructor(publicKey: string, privateKey: string, logger: Logger) {
    this.store = new OAuth2Store(publicKey, privateKey);
    this.server = oauth2orize.createServer();
    this.validation = new Validation(this.store);
    this.logger = logger;

    /**
     * Exchange client credentials for access tokens.
     */
    this.server.exchange(
      oauth2orize.exchange.clientCredentials(async (client, scope, done) => {
        try {
          const token = utils.createToken(
            this.store.privateKey,
            config.token.expiresIn,
            client.id,
          );
          const expiration = config.token.calculateExpirationDate();
          // Look up the user that corresponds to this client for this grant type
          const user = this.store.findUserByUsername(client.id);
          const accessToken = this.store.saveAccessToken(
            token,
            expiration,
            user.id,
            client.id,
            scope,
          );
          done(null, accessToken.token, undefined, this.expiresIn);
        } catch (error) {
          done(error, false);
        }
      }),
    );

    this.server.serializeClient((client, done) => {
      done(null, client.id);
    });

    this.server.deserializeClient((id, done) => {
      try {
        const client = this.store.getClient(id);
        done(null, client);
      } catch (error) {
        done(error, null);
      }
    });
  }

  tokenInfo(req, res) {
    try {
      const accessToken = this.validation.tokenForHttp(req.query.access_token);
      this.validation.tokenExistsForHttp(accessToken);
      const client = this.store.getClient(accessToken.clientId!);
      this.validation.clientExistsForHttp(client);
      const expirationLeft = Math.floor(
        (accessToken.expiration.getTime() - Date.now()) / 1000,
      );
      res.status(200).json({ audience: client.id, expires_in: expirationLeft });
    } catch (error) {
      this.logger.error(error.toString());
      res.status(500).json({ error: error.message });
    }
  }

  revokeToken(req, res) {
    try {
      let accessToken = this.validation.tokenForHttp(req.query.token);
      // deleteAccessToken throws rather than returning undefined,
      // so check which store holds the token before deleting
      if (this.store.hasAccessToken(accessToken.token)) {
        accessToken = this.store.deleteAccessToken(accessToken.token);
      } else {
        accessToken = this.store.deleteRefreshToken(req.query.token);
      }
      this.validation.tokenExistsForHttp(accessToken);
      res.status(200).json({});
    } catch (error) {
      res.status(500).json({ error: error.message });
    }
  }

  public get token() {
    return [
      passport.authenticate(['clientBasic', 'clientPassword'], {
        session: true,
      }),
      this.server.token(),
      this.server.errorHandler(),
    ];
  }
}

export default OAuth2;
utils.ts
import jwt from 'jsonwebtoken';
import { v4 as uuid } from 'uuid';

function createUuid(): string {
  return uuid();
}

function createToken(privateKey: string, expiry = 3600, subject = ''): string {
  const token = jwt.sign(
    {
      jti: createUuid(),
      subject,
      exp: Math.floor(Date.now() / 1000) + expiry,
    },
    privateKey,
    {
      algorithm: 'RS256',
    },
  );

  return token;
}

function verifyToken(token: string, publicKey: string) {
  return jwt.verify(token, publicKey);
}

export { createUuid, createToken, verifyToken };
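
Since the plan above is to drop jsonwebtoken in favour of jose (which we already depend on), an equivalent helper might look like this (a sketch; assumes PEM-encoded RSA keys as before):

import { randomUUID } from 'crypto';
import { SignJWT, jwtVerify, importPKCS8, importSPKI } from 'jose';

async function createTokenJose(
  privateKeyPem: string,
  expiry: number = 3600,
  subject: string = '',
): Promise<string> {
  const privateKey = await importPKCS8(privateKeyPem, 'RS256');
  return new SignJWT({})
    .setProtectedHeader({ alg: 'RS256' })
    .setJti(randomUUID())
    .setSubject(subject) // standard `sub` claim, unlike the custom claim above
    .setIssuedAt()
    .setExpirationTime(Math.floor(Date.now() / 1000) + expiry)
    .sign(privateKey);
}

async function verifyTokenJose(token: string, publicKeyPem: string) {
  const publicKey = await importSPKI(publicKeyPem, 'RS256');
  return (await jwtVerify(token, publicKey)).payload;
}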
Validation.ts
import * as utils from './utils';
import * as config from './Config';
import OAuth2Store, { AuthorizationCode, Client } from './OAuth2Store';
import {
  ErrorClientUndefined,
  ErrorInvalidCredentials,
  ErrorInvalidToken,
  ErrorUserUndefined,
} from '../../errors';

class Validation {
  store: OAuth2Store;
  constructor(store: OAuth2Store) {
    this.store = store;
  }

  user(user, password) {
    this.userExists(user);
    if (user.password !== password) {
      throw new ErrorInvalidCredentials('User password does not match');
    }
    return user;
  }

  userExists(user) {
    if (user == null) {
      throw new ErrorUserUndefined('User does not exist');
    }
    return user;
  }

  clientExists(client) {
    if (client == null) {
      throw new ErrorClientUndefined('Client does not exist');
    }
    return client;
  }

  refreshToken(token, refreshToken, client) {
    utils.verifyToken(refreshToken, this.store.publicKey);
    if (client.id !== token.clientID) {
      throw new ErrorInvalidCredentials(
        'RefreshToken clientID does not match client id given',
      );
    }
    return token;
  }

  authCode(
    code: string,
    authCode: AuthorizationCode,
    client: Client,
    redirectURI: string,
  ) {
    utils.verifyToken(code, this.store.publicKey);
    if (client.id !== authCode.clientId) {
      throw new ErrorInvalidCredentials(
        'AuthCode clientID does not match client id given',
      );
    }
    if (redirectURI !== authCode.redirectURI) {
      throw new ErrorInvalidCredentials(
        'AuthCode redirectURI does not match redirectURI given',
      );
    }
    return authCode;
  }

  isRefreshToken(authCode: AuthorizationCode) {
    return authCode != null && authCode.scope.indexOf('offline_access') === 0;
  }

  generateRefreshToken(authCode: AuthorizationCode) {
    const refreshToken = utils.createToken(
      this.store.privateKey,
      config.refreshToken.expiresIn,
      authCode.userId,
    );
    const expiration = config.token.calculateExpirationDate();
    return this.store.saveRefreshToken(
      refreshToken,
      expiration,
      // userId before clientId, matching saveRefreshToken's signature
      authCode.userId,
      authCode.clientId,
      authCode.scope,
    ).token;
  }

  generateToken(authCode: AuthorizationCode) {
    const token = utils.createToken(
      this.store.privateKey,
      config.token.expiresIn,
      authCode.userId,
    );
    const expiration = config.token.calculateExpirationDate();
    return this.store.saveAccessToken(
      token,
      expiration,
      authCode.userId,
      authCode.clientId,
      authCode.scope,
    ).token;
  }

  generateTokens(authCode: AuthorizationCode) {
    if (this.isRefreshToken(authCode)) {
      return Promise.all([
        this.generateToken(authCode),
        this.generateRefreshToken(authCode),
      ]);
    }
    return Promise.all([this.generateToken(authCode)]);
  }

  tokenForHttp(token: string) {
    try {
      utils.verifyToken(token, this.store.publicKey);
    } catch (error) {
      throw new ErrorInvalidToken('invalid_token');
    }
    // getAccessToken/getRefreshToken throw rather than returning undefined,
    // so check which store holds the token explicitly
    let accessToken;
    if (this.store.hasAccessToken(token)) {
      accessToken = this.store.getAccessToken(token);
    } else if (this.store.hasRefreshToken(token)) {
      accessToken = this.store.getRefreshToken(token);
    } else {
      throw new ErrorInvalidToken('token not found');
    }
    return accessToken;
  }

  /**
   * Given a token this will return the token if it is not null. Otherwise this will throw a
   * HTTP error.
   * @param   {Object} token - The token to check
   * @throws  {Error}  If the client is null
   * @returns {Object} The client if it is a valid client
   */
  tokenExistsForHttp(token) {
    if (!token) {
      throw new ErrorInvalidToken('invalid_token');
    }
    return token;
  }

  /**
   * Given a client this will return the client if it is not null. Otherwise this will throw a
   * HTTP error.
   * @param   {Object} client - The client to check
   * @throws  {Error}  If the client is null
   * @returns {Object} The client if it is a valid client
   */
  clientExistsForHttp(client) {
    if (!client) {
      throw new ErrorInvalidToken('invalid_token');
    }
    return client;
  }
}

export default Validation;
openapi.yaml
openapi: 3.0.2
info:
  title: Polykey API
  description: Peer to peer distributed secret sharing. HTTP API.
  version: 0.1.9
tags:
  - name: "ca"
    description: "Certificate authority operations"
  - name: "vaults"
    description: "Vault operations"
  - name: "secrets"
    description: "Secret Operations"
paths:
  /ca/root_certificate:
    get:
      tags:
        - "ca"
      summary: Returns the root certificate
      description: Returns the root certificate for the polykey node
      operationId: rootCertificate
      security:
        - bearerAuth: []
        - OAuth2-Client: []
      responses:
        "200":
          description: Root certificate
          content:
            text/plain:
              schema:
                type: string
        "401":
          description: Not authenticated
        "500":
          description: Internal server error
  /ca/certificate_chain:
    get:
      tags:
        - "ca"
      summary: Returns the certificate chain for verifying the root certificate
      description: Returns the certificate chain for the polykey node
      operationId: certificateChain
      security:
        - bearerAuth: []
        - OAuth2-Client: []
      responses:
        "200":
          description: Certificate Chain
          content:
            text/plain:
              schema:
                type: array
                items:
                  type: string
        "401":
          description: Not authenticated
        "500":
          description: Internal server error
  /ca/certificate_signing_request:
    post:
      tags:
        - "ca"
      summary: Request a signed certificate
      description: Request a certificate from the polykey node CA
      operationId: certificateSigningRequest
      security:
        - bearerAuth: [admin, request_certificate]
        - OAuth2-Client: [admin, request_certificate]
      requestBody:
        content:
          text/plain:
            schema:
              type: string
      responses:
        "200":
          description: Signed certificate
          content:
            text/plain:
              schema:
                type: string
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
  /vaults:
    get:
      tags:
        - "vaults"
      summary: List all vaults
      description: Returns a list of all vaults in the node
      operationId: vaultsList
      security:
        - bearerAuth: [admin, write_vaults, read_vaults]
        - OAuth2-Client: [admin, write_vaults, read_vaults]
      responses:
        "200":
          description: Vault List
          content:
            text/plain:
              schema:
                type: array
                items:
                  type: string
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
  "/vaults/{vaultName}":
    parameters:
      - name: vaultName
        description: Name of vault
        in: path
        required: true
        schema:
          type: string
    get:
      tags:
        - "secrets"
      summary: List secrets
      description: List all secrets in the vault named `vaultName`
      operationId: secretsList
      security:
        - bearerAuth: [admin, write_secrets, read_secrets]
        - OAuth2-Client: [admin, write_secrets, read_secrets]
      responses:
        "200":
          description: Secret List
          content:
            text/plain:
              schema:
                type: array
                items:
                  type: string
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
    post:
      tags:
        - "vaults"
      summary: Create a new vault
      description: Create a new vault named `vaultName`
      operationId: vaultsNew
      security:
        - bearerAuth: [admin, write_vaults]
        - OAuth2-Client: [admin, write_vaults]
      responses:
        "200":
          description: Vault was created successfully
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
    delete:
      tags:
        - "vaults"
      summary: Delete an existing vault
      description: Delete an existing vault called `vaultName`
      operationId: vaultsDelete
      security:
        - bearerAuth: [admin, write_vaults]
        - OAuth2-Client: [admin, write_vaults]
      responses:
        "200":
          description: Vault was deleted successfully
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
  "/secrets/{vaultName}/{secretName}":
    parameters:
      - name: vaultName
        description: Name of vault that contains the secret to be retrieved
        in: path
        required: true
        schema:
          type: string
      - name: secretName
        description: Name of secret to be retrieved
        in: path
        required: true
        schema:
          type: string
    get:
      tags:
        - "secrets"
      summary: Retrieve a secret
      description: Returns the secret `secretName` located in vault `vaultName`
      operationId: secretsGet
      security:
        - bearerAuth: [admin, write_secrets, read_secrets]
        - OAuth2-Client: [admin, write_secrets, read_secrets]
      responses:
        "200":
          description: Secret Content
          content:
            text/plain:
              schema:
                type: string
            application/octet-stream:
              schema:
                type: string
                format: binary
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
    post:
      tags:
        - "secrets"
      summary: Create a new secret
      description: Create a new secret within a specific vault
      operationId: secretsNew
      security:
        - bearerAuth: [admin, write_secrets]
        - OAuth2-Client: [admin, write_secrets]
      requestBody:
        description: Secret content
        content:
          text/plain:
            schema:
              type: string
          application/octet-stream:
            schema:
              type: string
              format: binary
      responses:
        "200":
          description: Secret was created successfully
          content: {}
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
    delete:
      tags:
        - "secrets"
      summary: Delete an existing secret
      description: Delete an existing secret within a specific vault
      operationId: secretsDelete
      security:
        - bearerAuth: [admin, write_secrets]
        - OAuth2-Client: [admin, write_secrets]
      responses:
        "200":
          description: Secret was deleted successfully
        "401":
          description: Not authenticated
        "403":
          description: Access token does not have the required scope
        "500":
          description: Internal server error
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
    OAuth2-Client:
      type: oauth2
      flows:
        clientCredentials:
          tokenUrl: /oauth/token
          refreshUrl: /oauth/refresh
          scopes:
            admin: Grants read and write access to both vaults and secrets
            request_certificate: Grants access to request a CA certificate
            write_vaults: "Grants delete, create and read access to vaults"
            read_vaults: Grants read access to vaults
            write_secrets: "Grants delete, create and read access to secrets"
            read_secrets: Grants read access to secrets

@CMCDragonkai

Some extra thoughts on this after reading: https://news.ycombinator.com/item?id=28295348 and https://fly.io/blog/api-tokens-a-tedious-survey/.

  • The tokens that we use to authenticate our CLI and GUI sessions should be the same as the tokens we are attempting for our HTTP API. That is, the session manager and sessions domain are the starting point of this epic; any development should evolve from that. See Session Management Commands #204 and Automatic session refresh and expiry information & Sessions Domain Refactoring According to Review #211.
  • There may be an issue with using JWT due to its kid parameter. Investigate this. We are currently using JWT for the session tokens, the notification messages, and sigchain claims. The latter two aren't used for authentication, so it shouldn't apply here. But the need for JWT to be encoded, and thus not human-readable, is always an issue.
  • Tokens are client-stored: they are not kept track of on the server side. Revocation is only possible globally, by resetting the session key (this acts sort of like a version, if we assume a single PK agent is a single user). A similar issue can occur for these HTTP tokens. Alternatively, a whitelist of tokens can be kept as a traditional session DB, or a blacklist of tokens can be stored, similar to a certificate revocation list (see the sketch after this list).
  • Opening up PK to HTTP clients and third-party users makes PK a multi-user system. PK as a representative of a gestalt identity in a federated trust network thus goes 2nd order: the CLI and GUI were designed from a single-user point of view, but this leads invariably to a multi-user point of view, at least in terms of using the PK agent and its resources on behalf of the owner. In OAuth2 terms, there's only 1 resource owner and 1 resource-server/authorisation-server in PK, but many possible clients. Multi-user systems do require more fine-grained revocation abilities.
  • With respect to OAuth2, other than our integration into identity providers, nobody signs into PK relying on a third-party resource server. However, there is potential to sign into other client applications relying on PK as the resource server. We imagine that API tokens would usually be created directly in the API, CLI, or GUI, but it is also possible that client applications want access to PK resources and prompt their users to sign in via PK. The sign-in process for PK is just the standard session unlocking protocol; we can take inspiration from WebAuthn https://webauthn.me/. This implies, however, an HTTP redirection endpoint, a corresponding web interface, and public accessibility of the PK agent (and a fixed domain). I reckon this will probably be a very rare use case, but it's worth considering how HashiCorp Vault expects client applications to acquire vault tokens in order to interact with Vault.
  • Vault/File schema - the existence of different kinds of API tokens means we have even more possible schema structures to support sophisticated secrets management for development workflows. See Vault and File Schema - Ingress and Egress Schemas #222.
  • Gestalt sync - creating an HTTP API token for one keynode means that other keynodes in the same gestalt can recognise that the token is valid within the gestalt. What does this mean? Can this be used to access resources across all keynodes in the same gestalt?
  • Single sign-on (understood to be per-gestalt), federated identity - extending the idea of signing into a client application via PK, this would amount to a sort of federated identity system, since PK keynodes are part of a greater gestalt.
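
On the revocation point above, a sketch of the blacklist alternative, assuming jose and a jti claim on each token (the names here are placeholders):

import { jwtVerify, type KeyLike } from 'jose';

// Tokens stay client-stored and stateless; only revoked token ids (jti)
// are tracked, like a certificate revocation list
class TokenDenyList {
  protected revoked: Set<string> = new Set();
  public revoke(jti: string): void {
    this.revoked.add(jti);
  }
  public isRevoked(jti: string): boolean {
    return this.revoked.has(jti);
  }
}

async function verifySessionToken(
  token: string,
  publicKey: KeyLike,
  denyList: TokenDenyList,
) {
  const { payload } = await jwtVerify(token, publicKey);
  // Global revocation would instead rotate the session key;
  // per-token revocation checks the deny-list
  if (payload.jti == null || denyList.isRevoked(payload.jti)) {
    throw new Error('token revoked');
  }
  return payload;
}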


CMCDragonkai commented Aug 25, 2021

It's important for all services to make use of TLS when connecting to PK. This is true for PK to PK, CLI/GUI to PK, and third parties to PK.

However, only PK to PK is expected to be based on mTLS, whereas CLI/GUI to PK is just TLS, and third parties the same. The main reason for this is that you need a PKI system to bootstrap mTLS.

To have mTLS before mTLS is a chicken-and-egg problem. Thus bootstrapping an mTLS-based inter-service architecture requires a PKI that itself doesn't require mTLS to connect to. Bootstrapping mTLS is a wiki use-case problem, and it's part of us exposing more cryptographic operations to the end user, which will require solving: #155
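
In Node terms the difference is just the server's TLS options (a sketch; the certificate variables are placeholders):

import https from 'https';

declare const serverCert: string; // node certificate (placeholder)
declare const serverKey: string; // node private key (placeholder)
declare const rootCert: string; // CA certificate (placeholder)

// CLI/GUI and third parties -> PK: plain TLS, only the server proves itself
const tlsOptions: https.ServerOptions = {
  cert: serverCert,
  key: serverKey,
};

// PK -> PK: mTLS, the server also demands a valid client certificate,
// which is why a PKI must exist before mTLS can be bootstrapped
const mtlsOptions: https.ServerOptions = {
  ...tlsOptions,
  ca: rootCert, // trust anchor for client certificates
  requestCert: true, // ask the client for its certificate
  rejectUnauthorized: true, // refuse clients without a valid certificate
};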


CMCDragonkai commented Aug 25, 2021

Storage and management of session tokens, and eventually these HTTP API tokens, is currently done directly on our DB. It does not make use of our vault/EFS abstraction.

If we want to dog-food our own creation and secrets management system, does it not make sense to reuse the vaults for storing the session tokens, given we expect users to use the vault system to manage their inter-service API tokens? It would be a strong test of our user experience. Bonus points for making PK use PK (a self-hosting secrets management system that uses its own secrets management to manage its own secrets), thus a higher-order PK.

The current limitation is the lack of schemas that would make vaults an arbitrary file system, plus we haven't fully settled on the vault API. So these issues, and all relevant vault-related APIs, will be relevant:

This would enable all of the use-case workflows for our own session tokens. For example:

@scottmmorris

@CMCDragonkai

Related to #235 in having a GRPC-web gateway, there are also projects that enable a GRPC gateway to an HTTP RESTful API.

For PK GUI usage, the main priorities are avoiding bridging via Electron IPC and exposing rich streaming features to the front end; we don't actually want to use a raw HTTP API. This means going down one of 2 routes:

  • GRPC-web proxy, which allows the usage of GRPC directly on the FE client; with websockets, this enables all streaming capabilities
  • GraphQL to HTTP to GRPC - this would spend an extra innovation token on GraphQL, but could mean we are automating a lot of the complexity of FE fetching and caching. I'm leaning towards leaving this for the MatrixOS-Desktop iteration, as there's a lot to learn with respect to GraphQL

For mobile clients, I imagine there will be fiddling required for GRPC as well, possibly also using GRPC-web.

@CMCDragonkai

If we change to using JSON RPC over HTTP/1.1 & HTTP/2, I believe the OAuth situation will likely be a lot simpler to implement. And as an "open" decentralised system, it would be easier for third-party and end-user systems to integrate over the network with a Polykey node.

GRPC seems to be focused on being an "internal" microservice protocol (where it is expected that a centralised entity (company) controls the entire infrastructure); it's not really something that "external" users can easily integrate into.
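
For illustration, a JSON-RPC 2.0 call is just an HTTP POST body, which any third-party language (or curl) can produce; the endpoint and method names here are hypothetical:

// Hypothetical JSON-RPC 2.0 call to a Polykey node; assumes Node 18+ fetch.
async function callVaultsList(baseUrl: string, accessToken: string) {
  const response = await fetch(`${baseUrl}/rpc`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The same bearer token story as the OAuth2 discussion above
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'vaultsList', // hypothetical method name
      params: {},
    }),
  });
  const { result, error } = await response.json();
  if (error != null) throw new Error(error.message);
  return result;
}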

@CMCDragonkai
Copy link
Member

CMCDragonkai commented May 17, 2022

In relation to blockchain identities #352 and the DID work https://www.w3.org/TR/did-core/ in W3C, I want to point out that the OAuth2 protocol and the associated OpenID Connect protocol are also related to this overall concept of "federated identity": https://en.wikipedia.org/wiki/Federated_identity.

Additional details: https://medium.com/amber-group/decentralized-identity-passport-to-web3-d3373479268a

Our gestalt system #190, represents a decentralised way of connecting up identity systems.

Now we can research and figure out how to position Polykey relative to all the different identity systems and protocols being developed currently. But there's something here related to our original idea of "secrets as capabilities".

Recently I discovered that the GitLab CI/CD system supports using the OpenID Connect (OIDC) protocol as a way to bootstrap trust between the GitLab runner, the CI/CD job, and a secrets provider (like Hashicorp Vault or AWS STS #71), and then allows the CI/CD job to request secrets that are scoped to the job (such as accessing some AWS S3 resource).

Documented:

This is basically an implementation of the principle of least privilege. Consider that the CI/CD job is "privilege separated" by not being given "ambient authority" via environment variables, and it is "privilege bracketed" because its secret-capabilities are only useful for that specific job. Once the job is done, it experiences "privilege revocation", either by having the token expire, by the token being cryptographically one-time-use through a nonce, or through some state change in the resource controller.

AWS recommends against storing "long-term credentials". Time expiry is a naive way of implementing privilege bracketing, since the privilege is technically only bracketed in the time factor/dimension, and in the "space dimension" only by where it is given.

One of the useful things here is that the secret provider is able to change or revoke the secret after it is given to the user of that secret. This is basically a proxy-capability, which is how capsec theorises capability revocation.

This basically validates the idea that capabilities used in a distributed network context ultimately require a "serialisation" format. And the serialisation of a capability is ultimately some sort of secret token. By embedding "logic" into the token itself via a structured token like JWT, you're able to build "smarts" into the token so that it can function as a "capability". You might as well call this a "smart token", as opposed to dumb tokens, which are just random identifiers checked for equality against some session database.
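To make this concrete, here is a minimal sketch in TypeScript of what issuing and verifying such a structured token could look like. This is illustrative only: the claim names and helper functions are hypothetical, and it hand-rolls an HMAC-signed JWT (JWS compact form) just to show that the "logic" lives inside the token rather than in a session database.

import { createHmac, timingSafeEqual } from 'node:crypto';

// Hypothetical claims for a capability-style "smart token": instead of a
// random session ID, the token itself carries the logic-relevant data.
type CapabilityClaims = {
  sub: string; // who the capability was issued to
  scope: string; // e.g. 'mirror:pull' rather than the full 'api'
  resource: string; // the specific resource the token is valid for
  exp: number; // privilege bracketing in the time dimension
  jti: string; // nonce, enabling one-time use or revocation lists
};

const b64url = (data: string): string =>
  Buffer.from(data).toString('base64url');

// Issue a signed token: header.payload.signature (JWS compact form)
function issueToken(claims: CapabilityClaims, key: Buffer): string {
  const header = b64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }));
  const payload = b64url(JSON.stringify(claims));
  const signature = createHmac('sha256', key)
    .update(`${header}.${payload}`)
    .digest('base64url');
  return `${header}.${payload}.${signature}`;
}

// Verification is pure computation over the token's own contents plus a
// shared key; no session database lookup is needed.
function verifyToken(token: string, key: Buffer): CapabilityClaims | null {
  const [header, payload, signature] = token.split('.');
  const expected = createHmac('sha256', key)
    .update(`${header}.${payload}`)
    .digest('base64url');
  const a = Buffer.from(signature ?? '');
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  const claims = JSON.parse(
    Buffer.from(payload, 'base64url').toString(),
  ) as CapabilityClaims;
  if (claims.exp < Date.now() / 1000) return null; // expired
  return claims;
}

The only server-side state needed beyond the key is whatever revocation mechanism is chosen (e.g. a list of revoked jti values), which is the indirection that revocability requires.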

This is very exciting; we are on the cusp of building this into Polykey. But it does require us to first figure out the trust bootstrap (gestalt system) and how it integrates into decentralised identifiers and OIDC identity federation, and then make use of structured smart tokens (like JWTs) to enable logic on the tokens.

Now back to the topic of OIDC identity provider. AWS describes their "web identity federation" as:

IAM OIDC identity providers are entities in IAM that describe an external identity provider (IdP) service that supports the OpenID Connect (OIDC) standard, such as Google or Salesforce. You use an IAM OIDC identity provider when you want to establish trust between an OIDC-compatible IdP and your AWS account. This is useful when creating a mobile app or web application that requires access to AWS resources, but you don't want to create custom sign-in code or manage your own user identities.

This could describe any existing identity system that wants to allow external systems to interact with identities. Even "login with GitHub" is basically allowing a third-party system to interact with identities on GitHub, delegating the responsibility of managing identities to GitHub. But what it means to interact with identities goes deeper than just SSO. And this is what PK addresses.

And to be clear, OpenID Connect is OAuth 2.0 with extra features. So it's basically saying AWS supports OAuth2: log in with AWS, and then use AWS resources in third-party apps. But you can also do this with software agents; it doesn't have to be human people.
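As a concrete aside, the 2-legged client-credentials flow for such a software agent is tiny in practice. A hedged sketch in TypeScript: the endpoint URL, client credentials and scope name are made up, while the form parameters are the standard ones from RFC 6749 §4.4.

// Hypothetical token endpoint and credentials; assumes Node 18+ for fetch
async function fetchServiceToken(): Promise<string> {
  const response = await fetch('https://idp.example.com/oauth2/token', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      // The client authenticates itself directly; no resource-owner leg
      Authorization:
        'Basic ' +
        Buffer.from('my-client-id:my-client-secret').toString('base64'),
    },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      scope: 'vaults:read', // scope naming is hypothetical
    }),
  });
  if (!response.ok) throw new Error(`Token request failed: ${response.status}`);
  const { access_token } = (await response.json()) as { access_token: string };
  return access_token;
}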

@CMCDragonkai
Copy link
Member

CMCDragonkai commented May 18, 2022

Here's an interesting way of explaining this whole idea of "smart tokens". This was my journey through it.

Many years ago we started with CapSec - capability-based security. It stirred up some controversy and criticisms came in.

Then came the Capability Myths Demolished paper: https://blog.acolyer.org/2016/02/16/capability-myths-demolished/. It criticised the criticism. Also see: https://news.ycombinator.com/item?id=22753464. The paper itself is quite readable.

Then came c2 wiki: https://wiki.c2.com/?CapabilitySecurityModel.

In it, it was explained that the decentralized world of the internet ultimately lacks a "central authority" (i.e. the kernel in an operating system) that forges and is the origin of capabilities (http://wiki.c2.com/?PowerBox), and thus one must transfer "tokens" as reified capabilities that can be sent between decentralized programs.

The problem is that our tokens are just dumb strings that have to be matched for equality in some database. They didn't really satisfy a lot of the cool things you can do in a capability operating system. But you could say they were easy to understand, and so everybody ended up creating their own ad-hoc session system without fully understanding all the implications and what it could be capable of.

So slowly we realised that these dumb tokens could become smarter by embedding information into the token and authenticating it, as with HMAC.

Further development led to the "macaroon" idea: https://github.com/nitram509/macaroons.js. A cookie with contextual caveats.

Today we can instead talk about JWT, and I believe JWT has taken over. https://neilmadden.blog/2020/07/29/least-privilege-with-less-effort-macaroon-access-tokens-in-am-7-0/

However, how JWTs can be used is still a point of R&D. And how JWTs could be used as "smart tokens" that realise the ideals of capsec across decentralised services is still being developed. I believe we can rebrand capabilities as "smart tokens" and it would have the cultural cachet of "smart contracts".

It'd be interesting to see how these smart tokens can be computed against, how they enable ABAC, and most importantly how they enable revocability, which requires indirection (https://news.ycombinator.com/item?id=22755068).

@CMCDragonkai
Copy link
Member

CMCDragonkai commented May 25, 2022

Here's a real world example of this problem.

Both GitHub and GitLab support webhooks. Webhooks are a great way of allowing web services to provide a "push"-based configuration mechanism.

Right now we have GitLab mirrors pulling GitHub, and it does this by polling GitHub. One of the advantages of push-based configuration is the ability to avoid polling and to have minimal delay so that events arrive faster; a sort of "best-effort" yet reliable delivery.

So to make GitHub push to GitLab, we can configure a webhook on GitHub.

This requires GitLab to have an API that supports a trigger to pull on a project mirror.

This API call is here: https://docs.gitlab.com/ee/api/projects.html#start-the-pull-mirroring-process-for-a-project

GitHub has a great webhook panel for debugging webhooks.

image

But the problem now is setting secrets.

Apparently there is a standard for secret passing on webhooks defined by the WebSub protocol, but not all providers support it. In any case, GitLab suggests using the ?private_token=... query parameter.

The resulting webhook looks like:

https://gitlab.com/api/v4/projects/<PROJECTID>/mirror/pull?private_token=<PERSONAL_ACCESS_TOKEN>

Now you need an access token.

The problem is that this token creation has to be done on GitLab, and during the creation of the token you need to grant it privileges; obviously we don't want to give GitHub access to our ENTIRE API.

  1. The docs don't explain what kind of privileges the call requires.
  2. Trial and error reveals that the token must be at Maintainer level, and that you need the full api scope

image

This privilege requirement is TOO MUCH. The token isn't really safe: it's just sitting in plain text in the webhook UI settings of the project. The token is at Maintainer level, thus capable of deleting projects etc., and finally it grants the entire api scope, rather than being limited to a specific API call.

The ambient authority of this token is too much for something that creates a minor benefit (making our mirrors faster), and for something that is handled this insecurely.

Thus making this integration too "risky" to implement.

Therefore having "smart tokens" would reduce the marginal risk of implementing these sorts of things so we can benefit from more secure integrations. One could argue that integrations today are limited by the inflexibility of privilege passing systems.

@CMCDragonkai
Copy link
Member

Example of the world moving towards passwordless authentication (Apple's keychain supporting it): https://news.ycombinator.com/item?id=31643917. This relies on an "authenticator" application, and not just an OTP-style authenticator, but basically the holder of your identity.

@CMCDragonkai
Copy link
Member

CMCDragonkai commented Jun 17, 2022

Recently GitHub had a token breach issue (a clear example of the secret management problem use-case):

We're writing to let you know that between 2022-02-25 18:28 UTC and 2022-03-02 20:47 UTC, due to a bug, GitHub Apps were able to generate new scoped installation tokens with elevated permissions. You are an owner of an organization on GitHub with GitHub Apps installed that generated at least one new token during this time period. While we do not have evidence that this bug was maliciously exploited, with our available data, we are not able to determine if a token was generated with elevated permissions.

A vulnerability that was live for about 1 week, between the 25th of February and the 2nd of March.

GitHub learned via a customer support ticket that GitHub Apps were able to generate scoped installation tokens with elevated permissions. Each of these tokens are valid for up to 1 hour.

GitHub quickly fixed the issue and established that this bug was recently introduced, existing for approximately 5 days between 2022-02-25 18:28 UTC and 2022-03-02 20:47 UTC.

These tokens are used by third party apps when you want to use them with GitHub.

GitHub Apps generate scoped installation tokens based on the scopes and permissions granted when the GitHub App is installed into a user account or organization. For example, if a GitHub App requests and is granted read permission to issues during installation, the scoped installation token the App generates to function would have issues:read permission.

This bug would potentially allow the above installation to generate a token with issues:write, an elevation of permission from the granted issues:read permission. The bug did not allow a GitHub App to generate a token with additional scopes that were not already granted, such as discussions:read in the above example. The bug did not allow a GitHub App to access any repositories that the App did not already have access to.

So basically, apps would request a token with the issues:read permission/capability. But somehow, the apps would be able to acquire a token that had the issues:write capability instead. If customers only saw issues:read, they basically granted an app more capabilities than they intended. It was of course an "elevation" of a capability, but not "extra" scopes as GitHub puts it, because permissions are hierarchically organised: in terms of "scope", and the elevation of a permission within the scope itself.

In order to exploit this bug, the GitHub App author would need to modify their app's code to request elevated permissions on generated tokens.

I'm guessing this is part of the "third party" OAuth flow, where third-party apps can be installed on the platform (in this case GitHub), say they want only issues:read, but end up acquiring issues:write instead.

GitHub immediately began working to fix the bug and started an investigation into the potential impact. However due to the scale and complexity of GitHub Apps and their short-lived tokens, we were unable to determine whether this bug was ever exploited.

Of course due to a lack of auditing of token usage, there's no way to tell if this elevated token was ever used, or if any apps took advantage of these elevated permissions.

We are notifying all organization and user account owners that had GitHub Apps installed and had a scoped installation token generated during the bug window so that they can stay informed and perform their own audit of their installed GitHub Apps.

As a followup to this investigation, GitHub is looking at ways to improve our logging to enable more in-depth analysis on scoped token generation and GitHub App permissions in the future.

Exactly: more auditing of smart tokens.

<Reference # GH-0003281-4728-1>

@CMCDragonkai
Copy link
Member

CMCDragonkai commented Jun 20, 2022

In summary, GitHub's problem is 2 problems:

  1. A protocol/logic bug affecting privilege elevation
  2. The lack of auditing/logging of secret token usage with respect to the permissions

Can software solve problems 1 and 2?

In terms of 2, yes, software can definitely solve that, but not traditional logging software. Traditional logging actually results in secret leaks, because people forget to wipe passwords/tokens from logs.

Secret-usage auditing/logging requires a purpose-built logging software designed to audit secret usage in particular. It's a far more complex workload than just non-secret logging.

As for problem 1, the idea that software can solve it is a little more nebulous. This is because the problem is an intersection of different interacting systems, and all parts of the system have to be coded securely for the entire protocol to work securely. One cannot say that applying a piece of software will eliminate these protocol bugs, because correctness depends on the entire system, not just the sub-pieces. Security is inherently cross-functional and intersectional, between machine and machine, and between human and machine.

But we understand that software can be 2 things: Framework or Library.

A library itself cannot solve this intersectionality problem.

However a framework can provide the structure so that the intersectionality performs/behaves according to a logically/securely-verified contract.

This reminds me of Ory and Kratos.

Ory Kratos is a fully customizable, API-only platform for login, two-factor authentication, social sign in, passwordless flows, registration, account recovery, email / phone verification, secure credentials, identity and user management.
Ory Hydra is an API-only, "headless," OAuth 2.0 and OpenID Connect provider that can interface with any identity and user management system, such as Ory Kratos, Firebase, your PHP app, LDAP, SAML, and others.
Ory Oathkeeper is a zero trust networking proxy and sidecar for popular Ingress services and API gateways. It checks if incoming network request are authenticated, and allowed to perform the requested action.
Ory Keto is the world's first and leading open source implementation of Google's Zanzibar research paper, an infinitely scalable and blazing fast authorization and permissioning service - RBAC on globally distributed steroids.

These software systems originally confused me because they did not explain to me as a developer that these are "frameworks". They are not "libraries" that are just plug-and-play.

If you want to sell a "framework", it's always going to be harder, because the customer has to conform their mental model and intersectionality to the framework. And that is a costly endeavour as it may be incompatible with their existing structure. You just can't easily bolt on a framework (compared to a library) after your software is already developed.

Libraries are inherently easier to just integrate, because they are self-contained. But self-contained systems cannot solve problems like security intersectionality.

Therefore frameworks work best when the client/customer has chosen one from the very beginning. The tradeoff is that if the framework turns out to be incorrect or faulty, it's always going to be a lot more difficult to move away from it, because the framework structure imposes deep path dependency on software development, and the larger your software becomes, the more deeply the framework is embedded.

This is why frameworks are chosen after they have broad community/industry acceptance: the more widely deployed a framework is, the more likely the framework has arrived at a generalisable non-leaky abstraction. On the other hand, this could also be because the framework just gains hack after hack due to legacy, and it ends up working only for the lowest common denominator or becomes a giant ball of complexity that handles every edge case.

To summarise, in application to PK: if PK wants to solve these 2 problems, it would need to act like a framework.


I wonder if there's a generic systems engineering terminology for the difference between library-like systems and framework-like systems.

@CMCDragonkai CMCDragonkai changed the title HTTP API and OAuth2 Provider API for PK Agents for Third Party Integration Third Party Integration - HTTP API, OAuth2 Provider API, Plugin API, Library vs Framework, Smart Tokens Jun 20, 2022
@CMCDragonkai CMCDragonkai added the research Requires research label Jun 20, 2022
@CMCDragonkai
Copy link
Member

CMCDragonkai commented Jun 20, 2022

This issue has become a more generic research/design issue now, as I'm exploring the intersection of a few ideas. There are a few more axes to consider:

  1. Client Side vs Server Side vs Orchestration Side - where does this software solution sit with respect to all the sides of an interaction
  2. Decentralised Systems vs Distributed Systems - centralisation/decentralisation is about control, distributed systems is about availability
  3. Customers are ultimately separated between Who Uses vs Who Pays - who has to operate it, and who benefits from its operation. In some cases these are the same person; however this is not always the case, and this has impacts on product development.

Client Side vs Server Side vs Orchestration Side

Some competing solutions are firmly on the server side. Most SaaS software in this space is server side. For example, Hashicorp's Vault and the Ory solutions above are all server-side solutions.

Password managers are generally client side solutions, the users of the password manager are not the web services, but the users of those web services.

There are also orchestration-side solutions: these govern the interaction between the client side and the server side, acting like middlemen. These are the hardest solutions to deploy because they require consent from both the client and server sides. However, inside a corporation you can imagine a top-down plan to run an orchestration-side solution internally between different sub-systems, some acting as clients and some as servers. Examples include Kubernetes and also Hashicorp Vault.

A client-side solution needs to optimise for non-technical users and end-user platforms like mobile phones, browsers, desktops, laptops, human-to-machine communication and GUIs. Server-side solutions need to optimise for machine-to-machine communication, automation, availability, APIs... etc.

PK can be client side, it can be server side, and it could also be orchestration side. But are there tradeoffs we are making by trying to be all sides? Does it even make sense to cover all sides? Perhaps we should focus on a particular side as a beachhead/lodgement before tackling all the other sides.

Decentralised vs Distributed

A decentralised solution implies a distributed system.

A distributed system doesn't imply a decentralised system.

  • Centralisation vs decentralisation is about control and ownership.
  • Distribution is about availability (referring to the "availability" in CAP)

Software can be distributed but still centralised.

Server side solutions are often distributed because of the demands of scale, look at Hashicorp Vault's recommendation for 3+ vault nodes. However they are often not decentralised, because server side solutions tend to be used by large actors who want centralised control.

Client side solutions are often non-distributed but benefit from decentralisation.

For decentralisation to benefit server-side solutions, the users of the server-side solution must trade away control in return for participation in shared contexts, where (non-zero-sum) value can be unlocked through absolute and comparative advantage. This is the goal of free-market economics. See the transition from mercantilism to free-market/laissez-faire economics.

Centralisation will always exist, and so will decentralisation, it all depends on context. Even in a free market, corporations exist as little centralised fiefdoms.

Decentralised systems can be used in a centralised way. In terms of software complexity, any time software becomes distributed, and then becomes decentralised, it increases the order of complexity of implementation.

PK is a decentralised and distributed system. However there are tradeoffs in its distributed system mechanics that currently make it less appealing for server side solutions. We need to investigate this.

Who Uses vs Who Pays

Users are not always the same people as those who pay for the software solution. This is particularly true when building for centralised systems. Centralisation primarily means that there will be a separation of duties between owners and operators. This is what creates the moral hazard or perverse incentives that lead to class struggle.

This impacts software solution incentives, because one would naively think that software should be written for the users. But if the funding for software development comes from the payers, then the features are written to benefit the payers, and not the users. When payers and users are aligned (or the same people), there's no conflict. But when they are separated, there will be conflict.

Users can never be neglected however, because if the gap between the owners and operators increases, eventually the internal contradiction grows to the point where the perceived benefit of a solution becomes divorced from the real benefits of the solution, such that the solution will be replaced by something that closes the gap better.

When selling to centralised institutions we would have to be aware of this gap, while focusing on decentralisation means we would have fewer misaligned incentives.

@CMCDragonkai
Copy link
Member

Investigated GraphQL; it's still not sufficient for what we need. Our RPC mechanism needs to support arbitrary streaming, and GraphQL just doesn't have any client-side streaming or bidirectional streaming capabilities.

I was thinking that, at the end of all of it, we could just have a plain old stream connection that we send and receive JSON messages (or protobuf messages) over, and just make use of any stream-based connection.

At the bottom, a WebRTC data channel, WireGuard or WebTransport can then be used to provide that reliable P2P stream, and whatever it is, it must be capable of datagram punch packets. It should also provide some sort of muxing and demuxing, so that it's possible to create independent streams over the same connection (as we do right now with GRPC streams).

Another thought I had is that a lot of these connections appear to be client-server. Then any bidirectionality has to be done on top again. If we get rid of all the complexity, and start from a foundation of a bidirectional connection, then it should be possible for both sides to create arbitrary streams on top that enable bidirectional RPC.

That would simplify the network architecture as it means any connection from Node 1 to Node 2 would enable Node 1 to call Node 2 and Node 2 to call Node 1.

@CMCDragonkai
Copy link
Member

CMCDragonkai commented Jul 16, 2022

Our own RPC can still work; all we need is a defined routing system, and then an encoding of messages like JSON-RPC.

Stream manipulation needs to be multiplexed on top of a connection.

I'd define 3 layers of APIs:

  • Connection API - create 1 connection to another node, that's it, it does all the establishment work and returns a connection object
  • Stream API - using the connection object, one can "open" a stream, one can also open multiple streams, streams are bidirectional and reliable
  • RPC API - all RPC calls are available over any given stream; a "unary" call is simply a request and a response over a stream

So the final layer RPC is built on top of the streams. It will be 1 to 1 from RPC to stream. A unary call is a function that creates a stream, sends 1 message, and gets back 1 message.

A streaming call is a function that creates a stream, sends N messages, and receives N messages.
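A rough sketch of these three layers in TypeScript. All interface and type names here are hypothetical, and message framing is hand-waved as one JSON message per stream chunk:

// Layer 1: Connection API - one established connection to another node
interface Connection {
  openStream(): Promise<Stream>; // mux a new stream over the connection
}

// Layer 2: Stream API - bidirectional, reliable, multiplexed
interface Stream {
  readable: AsyncIterable<Uint8Array>;
  write(data: Uint8Array): Promise<void>;
  close(): Promise<void>;
}

// Layer 3: RPC API - JSON-RPC-like envelopes; routing is just `method`
type RpcRequest = { method: string; params: unknown };
type RpcResponse = { result?: unknown; error?: { code: number; message: string } };

const encode = (msg: unknown) => new TextEncoder().encode(JSON.stringify(msg));
const decode = (data: Uint8Array) => JSON.parse(new TextDecoder().decode(data));

// A unary call: open a stream, send 1 message, read back 1 message
async function unaryCall(
  conn: Connection,
  method: string,
  params: unknown,
): Promise<RpcResponse> {
  const stream = await conn.openStream();
  try {
    const request: RpcRequest = { method, params };
    await stream.write(encode(request));
    for await (const data of stream.readable) {
      return decode(data) as RpcResponse; // first message is the response
    }
    throw new Error('Stream closed before a response was received');
  } finally {
    await stream.close();
  }
}

A streaming call is the same shape, except it keeps writing and keeps iterating instead of returning on the first message.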

In a way this is what GRPC was supposed to do, but it specifies too much about the connection API, and it should be "connection"-agnostic. At the same time, a connection is always client to server; the server has to open its own connection back to the client to send calls, but this should be unnecessary...

Maybe a better GRPC: https://github.com/deeplay-io/nice-grpc - then we swap out the underlying socket for a reliable socket provided by WebTransport or WebRTC.

On the topic of Node.js: originally the Electron process could call Node.js APIs directly without needing to send data to the Node.js process. This is now considered insecure. But I believe this can be used securely; one just needs to be aware of cross-site scripting and not use the "web browser", or use the secure bridge. This means GRPC can be used in Node.js without a problem. At the same time... it makes sense to eventually reduce the reliance on Node-specific APIs too.

@CMCDragonkai
Copy link
Member

WebSockets open up the PK client service to browser integration and anything else that supports WebSockets.

Most external libraries still expect HTTP-based APIs though, and I think WebSocket-based APIs are a bit rare.

An HTTP API can support unary, client streaming, server streaming and duplex calls, except that the flow must always be client first, then server.

So unary is fine.

Client streaming means client can send multiple messages and get back 1 response.

Server streaming means client sends 1 request, then can get back multiple messages streamed.

Duplex means the client must send all, then receive all.

This means there are edge cases where the HTTP API won't be able to perform: anything with interleaved IO between client and server.
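For what the HTTP API can do, server streaming is straightforward with the fetch streams API. A hedged sketch in TypeScript, assuming NDJSON framing (one JSON message per line) and a placeholder URL:

// Client sends 1 request, then incrementally consumes a streamed response
async function* serverStream(url: string): AsyncGenerator<unknown> {
  const response = await fetch(url);
  if (response.body == null) throw new Error('No response body');
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffered = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split('\n');
    buffered = lines.pop() ?? ''; // keep any partial trailing line
    for (const line of lines) {
      if (line.trim() !== '') yield JSON.parse(line);
    }
  }
}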

Where we enable a WebSocket server we can also provide an HTTP server; all we have to do is present a stream object to the RPC system, but it will need to place some constraints on the semantics.

@tegefaulkes already in the case of RPC, the client side has to send at least 1 message before it attempts to await a response from the server side, because the routing and triggering of the handler only occurs upon the first message. This constraint should be embedded into the system by making our stream throw an exception if you attempt to await a response from the server before first sending something to the stream.

This is an RPC issue, so it should be added into the middleware stack; you could call it a sort of "traffic light" system.
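A minimal sketch of that traffic light, reusing the hypothetical Stream interface from the layering sketch above: flip a boolean on the first write, and throw if a read is attempted before it.

function withTrafficLight(stream: Stream): Stream {
  let hasWritten = false;
  return {
    async write(data: Uint8Array): Promise<void> {
      hasWritten = true; // green light: reads are now allowed
      await stream.write(data);
    },
    // An async generator body only runs on the first read attempt,
    // so the check happens at read time, not at wrap time
    readable: (async function* () {
      if (!hasWritten) {
        throw new Error('Read attempted before the first message was sent');
      }
      yield* stream.readable;
    })(),
    close: () => stream.close(),
  };
}

Note that this would also reject the legitimate pattern of kicking off a concurrent consumer before writing, which is exactly the trade-off debated below.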

@tegefaulkes
Copy link
Contributor

I'm not sure how we could tell if the user is waiting for an output before sending anything. I guess we'd have to use a proxy for that? But do we really want to check for that?

It's possible I could await output before writing anything and not actually block any writes. I've used this in a test where I use an async arrow function to create a promise that resolves when the reading ends. Then I start writing to the stream. In this case the output is being awaited before any messages are being sent and it's a valid thing to do.

I don't really think we need something here to prevent a possible deadlock. I'm not sure there's a clean way to address it. Deadlocks like this can happen in a bunch of ways in our code, and we already deal with them without problems.

@CMCDragonkai
Copy link
Member

It is possible to hook into a write; you just need to know when the first write has occurred and flip a boolean. This is doable in the stream callbacks.

@tegefaulkes
Copy link
Contributor

Sure, but it's still possible to attempt a read before writing by using promises. The following will attempt a read before writing but not result in a deadlock.

const consume = (async () => {
  // Consume the readable side until it ends; started before any writes
  for await (const _ of readable) {
    // do nothing, only consume the stream
  }
})();

// do writes here
await writer.write('something');
await consume;

@CMCDragonkai
Copy link
Member

I see, so you're saying that in some cases it can be legitimate for something to start reading while something else concurrently writes. In that case, yeah, it doesn't make sense to throw an exception on read if a write hasn't been done... It is still a potential footgun though; it could result in a promise deadlock if one isn't aware of this.

@CMCDragonkai CMCDragonkai added r&d:polykey:core activity 1 Secret Vault Sharing and Secret History Management and removed epic Big issue with multiple subissues r&d:polykey:core activity 3 Peer to Peer Federated Hierarchy labels Aug 12, 2024