diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 000000000..e28e69132 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,45 @@ +--- +name: Bug report +about: Create a report to help us improve +title: '' +labels: '' +assignees: '' + +--- + +**Describe the bug** +A clear and concise description of what the bug is. + +**Where did it happen?** +Did the issue occur on CryptPad.fr or an instance hosted by a third party? +If on another instance, please provide its full URL. + +**To Reproduce** +Steps to reproduce the behavior: +1. Go to '...' +2. Click on '....' +3. Scroll down to '....' +4. See error + +**Expected behavior** +A clear and concise description of what you expected to happen. + +**Screenshots** +If applicable, add screenshots to help explain your problem. + +**Browser (please complete the following information):** + - OS: [e.g. iOS] + - Browser [e.g. firefox, tor browser, chrome, safari, brave, edge, ???] + - variations [e.g. Firefox nightly, Firefox ESR, Chromium, Ungoogled chrome] + - Version [e.g. 22] + - Extensions installed [e.g. UBlock Origin, Passbolt, LibreJS] + - Browser tweaks [e.g. firefox "Enhanced Tracking Protection" strict/custom mode, tor browser "safer" security level, chrome incognito mode] + +**Smartphone (please complete the following information):** + - Device: [e.g. iPhone6] + - OS: [e.g. iOS8.1] + - Browser [e.g. stock browser, safari] + - Version [e.g. 22] + +**Additional context** +Add any other context about the problem here. 
diff --git a/.gitignore b/.gitignore index d96f6e6ac..50796e9bb 100644 --- a/.gitignore +++ b/.gitignore @@ -20,4 +20,4 @@ block/ logs/ privileged.conf config/config.js - +*yolo.sh diff --git a/CHANGELOG.md b/CHANGELOG.md index 295112727..84b726ced 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,57 @@ +# NorthernWhiteRhino release (3.13.0) + +## Goals + +This release cycle we prioritized the completion of "access lists", a major feature that we're excited to introduce. + +## Update notes + +Nearly every week (sometimes more than once) we end up taking time away from development to help administrators configure their CryptPad instances. We're happy to see more instances popping up, but ideally we'd like to spend more of our time working on new features. With this in mind we devoted some time to simplifying instance configuration and clarifying some points where people commonly have difficulty. + +If you review `cryptpad/config.example.js` you'll notice it is significantly smaller than it was last release. +Old configuration files should be backwards compatible (if you copied `config.example.js` to `config.js` in order to customize it). +The example has been reorganized so that the most important parts (which people seemed to miss most of the time) are at the top. +Most of the fields which were defined within the config file now have defaults defined within the server itself. +If you supply these values they will override the default, but for the most part they can be removed. +We advise that you read the comments at the top of the example, in particular the points related to `httpUnsafeOrigin` and `httpSafeOrigin` which are used to protect users' cryptographic keys in the event of a cross-site scripting (XSS) vulnerability. +If these values are not correctly set then your users will not benefit from all the security measures we've spent lots of time implementing. 
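As a sketch of the two origin settings described above (the domain names below are placeholders, not values shipped with this release), a production `config/config.js` might contain:

```javascript
// Sketch of the two origin settings in config/config.js.
// The domains are illustrative; substitute your instance's real domains.
const config = {
    // The URL users enter to load your instance (HTTPS-only in production).
    httpUnsafeOrigin: 'https://cryptpad.example.com',
    // A different domain (a subdomain suffices) from which the sandboxed UI loads.
    httpSafeOrigin: 'https://sandbox.cryptpad.example.com',
};

module.exports = config;
```

The important property is that the two values point at *different* origins, so that the sandbox cannot reach the main domain's keys directly.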
+ +A lot of the fields that were present as modifiable defaults have been removed or commented out in the example config. +If you supply them then they will override the default behaviour; however, you probably won't need to, and doing so might break important functionality. +Content Security Policy (CSP) definitions should be safe to remove, as should `httpAddress`, `httpPort`, and `httpSafePort` (unless you need to run the nodejs API server on an address other than `localhost` or port 3000). + +Up until now it's been possible for administrators to allow users to pay for accounts (on their server) via https://accounts.cryptpad.fr. +Our intent was to securely handle payment and then split the proceeds between ourselves and the instance's administrator. +In practice this just created extra work for us because we ended up having to contact admins, all of whom have opted to treat the subscription as a donation to support development. +As such we have disabled the ability of users to pay for premium subscriptions (on https://accounts.cryptpad.fr) for any instance other than our own. + +Servers with premium subscriptions enabled were configured to check whether anyone had subscribed to a premium account by querying our accounts server on a daily basis. +We've left this daily check in place despite premium subscriptions being disabled because it informs us how many third-party instances exist and what versions they are running. +We don't sell or share this information with anyone, but it is useful to us because it tells us which older data structures we have to continue to support. +For instance, we retain code for migrating documents to newer data formats as long as we know that there are still instances that have not run those migrations. +We also cite the number of third-party instances when applying for grants as an indicator of the value of funding our project. 
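For an administrator who wants to opt out of this daily check-in, a minimal sketch (not a complete config; the option names are those documented in `config.example.js` later in this diff, and the values are illustrative) might look like:

```javascript
// Sketch: opting out of the daily check-in and raising the premium
// upload cap in config/config.js. Both values are illustrative.
const config = {
    // true disables the daily query to CryptPad's accounts server
    blockDailyCheck: true,
    // optional higher upload limit (in bytes) for premium users
    premiumUploadSize: 100 * 1024 * 1024, // 100MB
};

module.exports = config;
```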
+In any case, you can disable this daily check-in by setting `blockDailyCheck` to `true` in `config/config.js`. + +Finally, we've implemented the ability to set a higher limit on the maximum size of uploaded files for premium users (paying users on CryptPad.fr and users with entries in `customLimits` on other instances). +Set this limit as a number (of bytes) with `premiumUploadSize` in your config file. + +## Features + +* It is often difficult to fix problems reported as GitHub issues because we don't have enough information. The platform's repository now includes an _issue template_ listing the details that will probably be relevant to fixing bugs. Please read the list carefully, as we'll probably just close issues if the information we need is not included. +* We've made it easy to terminate all open sessions for your account. If you're logged in, you'll now see a _log out everywhere_ button in the _user admin menu_ (in the top-right corner of the screen). + * You may still terminate only _remote sessions_ while leaving your local session intact via the pre-existing button on the settings page's _confidentiality_ tab. +* You may have noticed that it takes progressively longer to load your account as you add more files to your drive, shared folders, and teams. This is because an integrity check is run on all your files when you first launch a CryptPad session. We optimized some parts of this check to speed it up. We plan to continue searching for similar processes that we can optimize in order to decrease loading time and improve run-time efficiency. +* Lastly, this release introduces **access lists**, which you can use to limit who can view your documents _even if they have the keys required to decrypt them_. You can do so by using the _Access_ modal for any given document, available in the `...` dropdown menu in each app's toolbar or when right-clicking in the drive. 
+ * Enabling access restriction for a document will disallow anyone except its owners or allowed users from opening it. Anyone else who is currently editing or viewing the document will be disconnected from the session. + +## Bug fixes + +* A member of _C3Wien_ reported some strange behaviour triggered by customizing some of Firefox's anti-tracking features. The settings incorrectly identified our cross-domain sandboxing system as a tracker and interfered with its normal functionality. As a result, the user was treated as though they were not logged in, even though pads from their account's drive were displayed within the "anonymous drive" that unregistered users normally see. + * This was simple to fix, requiring only that we adjust our method of checking whether a user is logged in. + * If you ever notice odd behaviour we do recommend that you review any customizations you've made to your browser, as we only test CryptPad under default conditions unless prompted to investigate an issue. +* Users that take advantage of the Mermaid renderer in our markdown editor's preview pane may have noticed that the preview's scroll position was lost whenever mermaid charts were modified. We've updated our renderer such that it preserves scroll position when redrawing elements, making it easier to see the effects of your changes when editing large charts. + # Megaloceros release (3.12.0) ## Goals diff --git a/config/config.example.js b/config/config.example.js index 25a4c97a6..273c196d2 100644 --- a/config/config.example.js +++ b/config/config.example.js @@ -1,67 +1,110 @@ -/* - globals module +/* globals module */ + +/* DISCLAIMER: + + There are two recommended methods of running a CryptPad instance: + + 1. Using a standalone nodejs server without HTTPS (suitable for local development) + 2. 
Using NGINX to serve static assets and to handle HTTPS for the API server's websocket traffic + + We do not officially recommend or support Apache, Docker, Kubernetes, Traefik, or any other configuration. + Support requests for such setups should be directed to their authors. + + If you're having difficulty configuring your instance + we suggest that you join the project's IRC/Matrix channel. + + If you don't have any difficulty configuring your instance and you'd like to + support us for the work that went into making it pain-free, we are quite happy + to accept donations via our opencollective page: https://opencollective.com/cryptpad + */ -var _domain = 'http://localhost:3000/'; - -// You can `kill -USR2` the node process and it will write out a heap dump. -// If your system doesn't support dumping, comment this out and install with -// `npm install --production` -// See: https://strongloop.github.io/strongloop.com/strongblog/how-to-heap-snapshots/ - -// to enable this feature, uncomment the line below: -// require('heapdump'); - -// we prepend a space because every usage expects it -// requiring admins to preserve it is unnecessarily confusing -var domain = ' ' + _domain; - -// Content-Security-Policy -var baseCSP = [ - "default-src 'none'", - "style-src 'unsafe-inline' 'self' " + domain, - "font-src 'self' data:" + domain, - - /* child-src is used to restrict iframes to a set of allowed domains. - * connect-src is used to restrict what domains can connect to the websocket. - * - * it is recommended that you configure these fields to match the - * domain which will serve your CryptPad instance. 
- */ - "child-src blob: *", - // IE/Edge - "frame-src blob: *", - - /* this allows connections over secure or insecure websockets - if you are deploying to production, you'll probably want to remove - the ws://* directive, and change '*' to your domain - */ - "connect-src 'self' ws: wss: blob:" + domain, - - // data: is used by codemirror - "img-src 'self' data: blob:" + domain, - "media-src * blob:", - - // for accounts.cryptpad.fr authentication and cross-domain iframe sandbox - "frame-ancestors *", - "" -]; - - module.exports = { +/* CryptPad is designed to serve its content over two domains. + * Account passwords and cryptographic content are handled on the 'main' domain, + * while the user interface is loaded on a 'sandbox' domain + * which can only access information which the main domain willingly shares. + * + * In the event of an XSS vulnerability in the UI (that's bad) + * this system prevents attackers from gaining access to your account (that's good). + * + * Most problems with new instances are related to this system blocking access + * because of incorrectly configured sandboxes. If you only see a white screen + * when you try to load CryptPad, this is probably the cause. + * + * PLEASE READ THE FOLLOWING COMMENTS CAREFULLY. + * + */ + +/* httpUnsafeOrigin is the URL that clients will enter to load your instance. + * Any other URL that somehow points to your instance is supposed to be blocked. + * The default provided below assumes you are loading CryptPad from a server + * which is running on the same machine, using port 3000. + * + * In a production instance this should be available ONLY over HTTPS + * using the default port for HTTPS (443), i.e. https://cryptpad.fr + * In such a case this should be handled by NGINX, as documented in + * cryptpad/docs/example.nginx.conf (see the $main_domain variable) + * + */ + httpUnsafeOrigin: 'http://localhost:3000/', +
+ * If you're testing or developing with CryptPad on your local machine then + * it is appropriate to leave this blank. The default behaviour is to serve + * the main domain over port 3000 and to serve the content over port 3001. + * + * This is not appropriate in a production environment where invasive networks + * may filter traffic going over abnormal ports. + * To correctly configure your production instance you must provide a URL + * with a different domain (a subdomain is sufficient). + * It will be used to load the UI in our 'sandbox' system. + * + * This value corresponds to the $sandbox_domain variable + * in the example nginx file. + * + * CUSTOMIZE AND UNCOMMENT THIS FOR PRODUCTION INSTALLATIONS. + */ + // httpSafeOrigin: "https://some-other-domain.xyz", + +/* httpAddress specifies the address on which the nodejs server + * should be accessible. By default it will listen on 127.0.0.1 + * (IPv4 localhost on most systems). If you want it to listen on + * all addresses, including IPv6, set this to '::'. + * + */ + //httpAddress: '::', + +/* httpPort specifies on which port the nodejs server should listen. + * By default it will serve content over port 3000, which is suitable + * for both local development and for use with the provided nginx example, + * which will proxy websocket traffic to your node server. + * + */ + //httpPort: 3000, + +/* httpSafePort allows you to specify an alternative port from which + * the node process should serve sandboxed assets. The default value is + * that of your httpPort + 1. You probably don't need to change this. + * + */ + //httpSafePort: 3001, + /* ===================== * Admin * ===================== */ /* - * CryptPad now contains an administration panel. Its access is restricted to specific + * CryptPad contains an administration panel. Its access is restricted to specific * users using the following list. 
* To give access to the admin panel to a user account, just add their user id, * which can be found on the settings page for registered users. * Entries should be strings separated by a comma. */ +/* adminKeys: [ //"https://my.awesome.website/user/#/1/cryptpad-user1/YZgXQxKR0Rcb6r6CmxHPdAGLVludrAF2lEnkbx1vVOo=", ], +*/ /* CryptPad's administration panel includes a "support" tab * wherein administrators with a secret key can view messages @@ -76,159 +119,55 @@ module.exports = { */ // supportMailboxPublicKey: "", - /* ===================== - * Infra setup - * ===================== */ - - // the address you want to bind to, :: means all ipv4 and ipv6 addresses - // this may not work on all operating systems - httpAddress: '::', - - // the port on which your httpd will listen - httpPort: 3000, - - // This is for allowing the cross-domain iframe to function when developing - httpSafePort: 3001, - - // This is for deployment in production, CryptPad uses a separate origin (domain) to host the - // cross-domain iframe. It can simply host the same content as CryptPad. - // httpSafeOrigin: "https://some-other-domain.xyz", - - httpUnsafeOrigin: domain, - - /* Your CryptPad server will share this value with clients - * via its /api/config endpoint. + /* We're very proud that CryptPad is available to the public as free software! + * We do, however, still need to pay our bills as we develop the platform. * - * If you want to host your API and asset servers on different hosts - * specify a URL for your API server websocket endpoint, like so: - * wss://api.yourdomain.com/cryptpad_websocket + * By default CryptPad will prompt users to consider donating to + * our OpenCollective campaign. We publish the state of our finances periodically + * so you can decide for yourself whether our expenses are reasonable. 
* - * Otherwise, leave this commented and your clients will use the default - * websocket (wss://yourdomain.com/cryptpad_websocket) + * You can disable any solicitations for donations by setting 'removeDonateButton' to true, + * but we'd appreciate it if you didn't! */ - //externalWebsocketURL: 'wss://api.yourdomain.com/cryptpad_websocket + //removeDonateButton: false, - /* CryptPad can be configured to send customized HTTP Headers - * These settings may vary widely depending on your needs - * Examples are provided below - */ - httpHeaders: { - "X-XSS-Protection": "1; mode=block", - "X-Content-Type-Options": "nosniff", - "Access-Control-Allow-Origin": "*" - }, - - contentSecurity: baseCSP.join('; ') + - "script-src 'self'" + domain, - - // CKEditor and OnlyOffice require significantly more lax content security policy in order to function. - padContentSecurity: baseCSP.join('; ') + - "script-src 'self' 'unsafe-eval' 'unsafe-inline'" + domain, - - /* Main pages - * add exceptions to the router so that we can access /privacy.html - * and other odd pages - */ - mainPages: [ - 'index', - 'privacy', - 'terms', - 'about', - 'contact', - 'what-is-cryptpad', - 'features', - 'faq', - 'maintenance' - ], - - /* ===================== - * Subscriptions - * ===================== */ - - /* Limits, Donations, Subscriptions and Contact - * - * By default, CryptPad limits every registered user to 50MB of storage. It also shows a - * subscribe button which allows them to upgrade to a paid account. We handle payment, - * and keep 50% of the proceeds to fund ongoing development. - * - * You can: - * A: leave things as they are - * B: disable accounts but display a donate button - * C: hide any reference to paid accounts or donation - * - * If you chose A then there's nothing to do. - * If you chose B, set 'allowSubscriptions' to false. 
- * If you chose C, set 'removeDonateButton' to true - */ - allowSubscriptions: true, - removeDonateButton: false, - - /* - * By default, CryptPad also contacts our accounts server once a day to check for changes in - * the people who have accounts. This check-in will also send the version of your CryptPad - * instance and your email so we can reach you if we are aware of a serious problem. We will - * never sell it or send you marketing mail. If you want to block this check-in and remain - * completely invisible, set this and allowSubscriptions both to false. + /* CryptPad will display a point of contact for your instance on its contact page + * (/contact.html) if you provide it below. */ adminEmail: 'i.did.not.read.my.config@cryptpad.fr', - /* Sales coming from your server will be identified by your domain + /* + * By default, CryptPad contacts one of our servers once a day. + * This check-in will also send some very basic information about your instance including its + * version and the adminEmail so we can reach you if we are aware of a serious problem. + * We will never sell it or send you marketing mail. * - * If you are using CryptPad in a business context, please consider taking a support contract - * by contacting sales@cryptpad.fr + * If you want to block this check-in and remain completely invisible, set 'blockDailyCheck' to true. */ - myDomain: _domain, + //blockDailyCheck: false, /* - * If you are using CryptPad internally and you want to increase the per-user storage limit, - * change the following value. + * By default users get 50MB of storage by registering on an instance. + * You can set this value to whatever you want. * - * Please note: This limit is what makes people subscribe and what pays for CryptPad - * development. Running a public instance that provides a "better deal" than cryptpad.fr - * is effectively using the project against itself. 
+ * hint: 50MB is 50 * 1024 * 1024 */ - defaultStorageLimit: 50 * 1024 * 1024, + //defaultStorageLimit: 50 * 1024 * 1024, - /* - * CryptPad allows administrators to give custom limits to their friends. - * add an entry for each friend, identified by their user id, - * which can be found on the settings page. Include a 'limit' (number of bytes), - * a 'plan' (string), and a 'note' (string). - * - * hint: 1GB is 1024 * 1024 * 1024 bytes - */ - customLimits: { - /* - "https://my.awesome.website/user/#/1/cryptpad-user1/YZgXQxKR0Rcb6r6CmxHPdAGLVludrAF2lEnkbx1vVOo=": { - limit: 20 * 1024 * 1024 * 1024, - plan: 'insider', - note: 'storage space donated by my.awesome.website' - }, - "https://my.awesome.website/user/#/1/cryptpad-user2/GdflkgdlkjeworijfkldfsdflkjeEAsdlEnkbx1vVOo=": { - limit: 10 * 1024 * 1024 * 1024, - plan: 'insider', - note: 'storage space donated by my.awesome.website' - } - */ - }, /* ===================== * STORAGE * ===================== */ - /* By default the CryptPad server will run scheduled tasks every five minutes - * If you want to run scheduled tasks in a separate process (like a crontab) - * you can disable this behaviour by setting the following value to true - */ - disableIntegratedTasks: false, - /* Pads that are not 'pinned' by any registered user can be set to expire * after a configurable number of days of inactivity (default 90 days). * The value can be changed or set to false to remove expiration. * Expired pads can then be removed using a cron job calling the - * `delete-inactive.js` script with node + * `evict-inactive.js` script with node + * + * defaults to 90 days if nothing is provided */ - inactiveTime: 90, // days + //inactiveTime: 90, // days /* CryptPad archives some data instead of deleting it outright. * This archived data still takes up space and so you'll probably still want to @@ -241,31 +180,46 @@ module.exports = { * deletion. 
Set this value to the number of days you'd like to retain * archived data before it's removed permanently. * + * defaults to 15 days if nothing is provided */ - archiveRetentionTime: 15, + //archiveRetentionTime: 15, /* Max Upload Size (bytes) * this sets the maximum size of any one file uploaded to the server. * anything larger than this size will be rejected + * defaults to 20MB if no value is provided */ - maxUploadSize: 20 * 1024 * 1024, + //maxUploadSize: 20 * 1024 * 1024, - // XXX - premiumUploadSize: 100 * 1024 * 1024, - - /* ===================== - * HARDWARE RELATED - * ===================== */ - - /* CryptPad's file storage adaptor closes unused files after a configurable - * number of milliseconds (default 30000 (30 seconds)) + /* + * CryptPad allows administrators to give custom limits to their friends. + * add an entry for each friend, identified by their user id, + * which can be found on the settings page. Include a 'limit' (number of bytes), + * a 'plan' (string), and a 'note' (string). + * + * hint: 1GB is 1024 * 1024 * 1024 bytes */ - channelExpirationMs: 30000, +/* + customLimits: { + "https://my.awesome.website/user/#/1/cryptpad-user1/YZgXQxKR0Rcb6r6CmxHPdAGLVludrAF2lEnkbx1vVOo=": { + limit: 20 * 1024 * 1024 * 1024, + plan: 'insider', + note: 'storage space donated by my.awesome.website' + }, + "https://my.awesome.website/user/#/1/cryptpad-user2/GdflkgdlkjeworijfkldfsdflkjeEAsdlEnkbx1vVOo=": { + limit: 10 * 1024 * 1024 * 1024, + plan: 'insider', + note: 'storage space donated by my.awesome.website' + } + }, +*/ - /* CryptPad's file storage adaptor is limited by the number of open files. - * When the adaptor reaches openFileLimit, it will clean up older files + /* Users with premium accounts (those with a plan included in their customLimit) + * can benefit from an increased upload size limit. By default they are restricted to the same + * upload size as any other registered user. 
+ * */ - openFileLimit: 2048, + //premiumUploadSize: 100 * 1024 * 1024, /* ===================== * DATABASE VOLUMES diff --git a/customize.dist/pages.js b/customize.dist/pages.js index 3c4b2b09e..117ddac87 100644 --- a/customize.dist/pages.js +++ b/customize.dist/pages.js @@ -107,7 +107,7 @@ define([ ])*/ ]) ]), - h('div.cp-version-footer', "CryptPad v3.12.0 (Megaloceros)") + h('div.cp-version-footer', "CryptPad v3.13.0 (NorthernWhiteRhino)") ]); }; diff --git a/customize.dist/src/less2/include/alertify.less b/customize.dist/src/less2/include/alertify.less index f668f1735..cd6a65853 100644 --- a/customize.dist/src/less2/include/alertify.less +++ b/customize.dist/src/less2/include/alertify.less @@ -434,7 +434,7 @@ width: 50px; margin: 0; min-width: 0; - font-size: 18px; + font-size: 18px !important; } } } diff --git a/customize.dist/src/less2/include/toolbar.less b/customize.dist/src/less2/include/toolbar.less index 91b246465..3c59218f4 100644 --- a/customize.dist/src/less2/include/toolbar.less +++ b/customize.dist/src/less2/include/toolbar.less @@ -1159,6 +1159,11 @@ margin-left: 11px; } } + &.fa-unlock-alt { + .cp-toolbar-drawer-element { + margin-left: 15px; + } + } &.fa-question { .cp-toolbar-drawer-element { margin-left: 16px; diff --git a/lib/commands/channel.js b/lib/commands/channel.js index da4fb6685..10131d9d8 100644 --- a/lib/commands/channel.js +++ b/lib/commands/channel.js @@ -5,6 +5,7 @@ const Util = require("../common-util"); const nThen = require("nthen"); const Core = require("./core"); const Metadata = require("./metadata"); +const HK = require("../hk-util"); Channel.clearOwnedChannel = function (Env, safeKey, channelId, cb, Server) { if (typeof(channelId) !== 'string' || channelId.length !== 32) { @@ -203,6 +204,7 @@ Channel.isNewChannel = function (Env, channel, cb) { if (!Core.isValidId(channel)) { return void cb('INVALID_CHAN'); } if (channel.length !== 32) { return void cb('INVALID_CHAN'); } + // TODO replace with readMessagesBin var done = 
false; Env.msgStore.getMessages(channel, function (msg) { if (done) { return; } @@ -228,7 +230,9 @@ Channel.isNewChannel = function (Env, channel, cb) { Otherwise behaves the same as sending to a channel */ -Channel.writePrivateMessage = function (Env, args, cb, Server) { +Channel.writePrivateMessage = function (Env, args, _cb, Server, netfluxId) { + var cb = Util.once(Util.mkAsync(_cb)); + var channelId = args[0]; var msg = args[1]; @@ -246,31 +250,52 @@ Channel.writePrivateMessage = function (Env, args, cb, Server) { return void cb("NOT_IMPLEMENTED"); } - // historyKeeper expects something with an 'id' attribute - // it will fail unless you provide it, but it doesn't need anything else - var channelStruct = { - id: channelId, - }; + nThen(function (w) { + Metadata.getMetadataRaw(Env, channelId, w(function (err, metadata) { + if (err) { + w.abort(); + Env.Log.error('HK_WRITE_PRIVATE_MESSAGE', err); + return void cb('METADATA_ERR'); + } - // construct a message to store and broadcast - var fullMessage = [ - 0, // idk - null, // normally the netflux id, null isn't rejected, and it distinguishes messages written in this way - "MSG", // indicate that this is a MSG - channelId, // channel id - msg // the actual message content. Generally a string - ]; + if (!metadata || !metadata.restricted) { + return; + } - // XXX RESTRICT respect allow lists + var session = HK.getNetfluxSession(Env, netfluxId); + var allowed = HK.listAllowedUsers(metadata); - // historyKeeper already knows how to handle metadata and message validation, so we just pass it off here - // if the message isn't valid it won't be stored. 
- Env.historyKeeper.channelMessage(Server, channelStruct, fullMessage); + if (HK.isUserSessionAllowed(allowed, session)) { return; } - Server.getChannelUserList(channelId).forEach(function (userId) { - Server.send(userId, fullMessage); + w.abort(); + cb('INSUFFICIENT_PERMISSIONS'); + })); + }).nThen(function () { + // historyKeeper expects something with an 'id' attribute + // it will fail unless you provide it, but it doesn't need anything else + var channelStruct = { + id: channelId, + }; + + // construct a message to store and broadcast + var fullMessage = [ + 0, // idk + null, // normally the netflux id, null isn't rejected, and it distinguishes messages written in this way + "MSG", // indicate that this is a MSG + channelId, // channel id + msg // the actual message content. Generally a string + ]; + + + // historyKeeper already knows how to handle metadata and message validation, so we just pass it off here + // if the message isn't valid it won't be stored. + Env.historyKeeper.channelMessage(Server, channelStruct, fullMessage); + + Server.getChannelUserList(channelId).forEach(function (userId) { + Server.send(userId, fullMessage); + }); + + cb(); }); - - cb(); }; diff --git a/lib/commands/metadata.js b/lib/commands/metadata.js index a5bca0dca..802942fcb 100644 --- a/lib/commands/metadata.js +++ b/lib/commands/metadata.js @@ -46,6 +46,7 @@ Data.getMetadata = function (Env, channel, cb, Server, netfluxId) { return void cb(void 0, { restricted: metadata.restricted, allowed: allowed, + rejected: true, }); } cb(void 0, metadata); @@ -139,16 +140,10 @@ Data.setMetadata = function (Env, safeKey, data, cb, Server) { next(); const metadata_cache = Env.metadata_cache; - const channel_cache = Env.channel_cache; // update the cached metadata metadata_cache[channel] = metadata; - // as well as the metadata that's attached to the index... - // XXX determine if we actually need this... 
- var index = Util.find(channel_cache, [channel, 'index']); - if (index && typeof(index) === 'object') { index.metadata = metadata; } - // it's easy to check if the channel is restricted const isRestricted = metadata.restricted; // and these values will be used in any case diff --git a/lib/commands/pin-rpc.js b/lib/commands/pin-rpc.js index bd6852a67..2888f1e61 100644 --- a/lib/commands/pin-rpc.js +++ b/lib/commands/pin-rpc.js @@ -174,6 +174,7 @@ var loadUserPins = function (Env, safeKey, cb) { }); // if channels aren't in memory. load them from disk + // TODO replace with readMessagesBin Env.pinStore.getMessages(safeKey, lineHandler, function () { // no more messages diff --git a/lib/commands/quota.js b/lib/commands/quota.js index 74c4eca44..9e1c631d9 100644 --- a/lib/commands/quota.js +++ b/lib/commands/quota.js @@ -35,11 +35,8 @@ Quota.applyCustomLimits = function (Env) { }; Quota.updateCachedLimits = function (Env, cb) { - if (Env.adminEmail === false) { - Quota.applyCustomLimits(Env); - if (Env.allowSubscriptions === false) { return; } - throw new Error("allowSubscriptions must be false if adminEmail is false"); - } + Quota.applyCustomLimits(Env); + if (Env.allowSubscriptions === false || Env.blockDailyCheck === true) { return void cb(); } var body = JSON.stringify({ domain: Env.myDomain, @@ -81,8 +78,8 @@ Quota.updateCachedLimits = function (Env, cb) { req.on('error', function (e) { Quota.applyCustomLimits(Env); - // FIXME this is always falsey. Maybe we just suppress errors? 
- if (!Env.domain) { return cb(); } + if (!Env.myDomain) { return cb(); } + // only return an error if your server allows subscriptions cb(e); }); diff --git a/lib/defaults.js b/lib/defaults.js new file mode 100644 index 000000000..7119a0c6a --- /dev/null +++ b/lib/defaults.js @@ -0,0 +1,86 @@ +var Default = module.exports; + +Default.commonCSP = function (domain) { + domain = ' ' + domain; + // Content-Security-Policy + + return [ + "default-src 'none'", + "style-src 'unsafe-inline' 'self' " + domain, + "font-src 'self' data:" + domain, + + /* child-src is used to restrict iframes to a set of allowed domains. + * connect-src is used to restrict what domains can connect to the websocket. + * + * it is recommended that you configure these fields to match the + * domain which will serve your CryptPad instance. + */ + "child-src blob: *", + // IE/Edge + "frame-src blob: *", + + /* this allows connections over secure or insecure websockets + if you are deploying to production, you'll probably want to remove + the ws://* directive, and change '*' to your domain + */ + "connect-src 'self' ws: wss: blob:" + domain, + + // data: is used by codemirror + "img-src 'self' data: blob:" + domain, + "media-src * blob:", + + // for accounts.cryptpad.fr authentication and cross-domain iframe sandbox + "frame-ancestors *", + "" + ]; +}; + +Default.contentSecurity = function (domain) { + return (Default.commonCSP(domain).join('; ') + "script-src 'self' resource: " + domain).replace(/\s+/g, ' '); +}; + +Default.padContentSecurity = function (domain) { + return (Default.commonCSP(domain).join('; ') + "script-src 'self' 'unsafe-eval' 'unsafe-inline' resource: " + domain).replace(/\s+/g, ' '); +}; + +Default.httpHeaders = function () { + return { + "X-XSS-Protection": "1; mode=block", + "X-Content-Type-Options": "nosniff", + "Access-Control-Allow-Origin": "*" + }; +}; + +Default.mainPages = function () { + return [ + 'index', + 'privacy', + 'terms', + 'about', + 'contact', + 
'what-is-cryptpad', + 'features', + 'faq', + 'maintenance' + ]; +}; + +/* By default the CryptPad server will run scheduled tasks every five minutes + * If you want to run scheduled tasks in a separate process (like a crontab) + * you can disable this behaviour by setting the following value to true + */ + //disableIntegratedTasks: false, + + /* CryptPad's file storage adaptor closes unused files after a configurable + * number of milliseconds (default 30000 (30 seconds)) + */ +// channelExpirationMs: 30000, + + /* CryptPad's file storage adaptor is limited by the number of open files. + * When the adaptor reaches openFileLimit, it will clean up older files + */ + //openFileLimit: 2048, + + + + diff --git a/lib/historyKeeper.js b/lib/historyKeeper.js index ed67602bd..fcd291414 100644 --- a/lib/historyKeeper.js +++ b/lib/historyKeeper.js @@ -65,10 +65,12 @@ module.exports.create = function (config, cb) { WARN: WARN, flushCache: config.flushCache, adminEmail: config.adminEmail, - allowSubscriptions: config.allowSubscriptions, + allowSubscriptions: config.allowSubscriptions === true, + blockDailyCheck: config.blockDailyCheck === true, + myDomain: config.myDomain, - mySubdomain: config.mySubdomain, - customLimits: config.customLimits, + mySubdomain: config.mySubdomain, // only exists for the accounts integration + customLimits: config.customLimits || {}, // FIXME this attribute isn't in the default conf // but it is referenced in Quota domain: config.domain diff --git a/lib/hk-util.js b/lib/hk-util.js index 41d305172..7545c2f56 100644 --- a/lib/hk-util.js +++ b/lib/hk-util.js @@ -227,7 +227,6 @@ const computeIndex = function (Env, channelName, cb) { const cpIndex = []; let messageBuf = []; - let metadata; let i = 0; const CB = Util.once(cb); @@ -235,14 +234,9 @@ const computeIndex = function (Env, channelName, cb) { const offsetByHash = {}; let size = 0; nThen(function (w) { - getMetadata(Env, channelName, w(function (err, _metadata) { - //if (err) { console.log(err); 
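The new `lib/defaults.js` above collapses a list of CSP directives into a single header value. A trimmed-down sketch of that assembly (only a few directives kept, and the domain is a placeholder — not a real instance):

```javascript
// Trimmed-down sketch of how Default.contentSecurity assembles the CSP header.
// The trailing empty string in the directive list is what yields the "; "
// separator before script-src is appended.
var commonCSP = function (domain) {
    domain = ' ' + domain;
    return [
        "default-src 'none'",
        "style-src 'unsafe-inline' 'self'" + domain,
        "frame-ancestors *",
        "" // keeps a trailing "; " when joined below
    ];
};

var contentSecurity = function (domain) {
    return (commonCSP(domain).join('; ') +
        "script-src 'self' resource: " + domain).replace(/\s+/g, ' ');
};

console.log(contentSecurity('https://pad.example.com'));
```

Joining produces one line such as `default-src 'none'; style-src 'unsafe-inline' 'self' https://pad.example.com; frame-ancestors *; script-src 'self' resource: https://pad.example.com`, which `server.js` uses whenever the admin has not supplied a `contentSecurity` string of their own.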
}
-            metadata = _metadata;
-        }));
-    }).nThen(function (w) {
         // iterate over all messages in the channel log
         // old channels can contain metadata as the first message of the log
-        // remember metadata the first time you encounter it
+        // skip over metadata as that is handled elsewhere
         // otherwise index important messages in the log
         store.readMessagesBin(channelName, 0, (msgObj, readMore) => {
             let msg;
@@ -303,7 +297,7 @@ const computeIndex = function (Env, channelName, cb) {
             cpIndex: sliceCpIndex(cpIndex, i),
             offsetByHash: offsetByHash,
             size: size,
-            metadata: metadata,
+            //metadata: metadata,
             line: i
         });
     });
@@ -613,11 +607,40 @@ const handleRPC = function (Env, Server, seq, userId, parsed) {
     }
 };

+/*
+    This is called when a user tries to connect to a channel that doesn't exist.
+    We initialize that channel by writing the metadata supplied by the user to its log.
+    If the provided metadata has an expire time then we also create a task to expire it.
+*/
+const handleFirstMessage = function (Env, channelName, metadata) {
+    Env.store.writeMetadata(channelName, JSON.stringify(metadata), function (err) {
+        if (err) {
+            // FIXME tell the user that there was a channel error?
+            return void Env.Log.error('HK_WRITE_METADATA', {
+                channel: channelName,
+                error: err,
+            });
+        }
+    });
+
+    // write tasks
+    if (metadata.expire && typeof(metadata.expire) === 'number') {
+        // the fun part...
+        // the user has said they want this pad to expire at some point
+        Env.tasks.write(metadata.expire, "EXPIRE", [ channelName ], function (err) {
+            if (err) {
+                // if there is an error, we don't want to crash the whole server...
+ // just log it, and if there's a problem you'll be able to fix it + // at a later date with the provided information + Env.Log.error('HK_CREATE_EXPIRE_TASK', err); + Env.Log.info('HK_INVALID_EXPIRE_TASK', JSON.stringify([metadata.expire, 'EXPIRE', channelName])); + } + }); + } +}; + const handleGetHistory = function (Env, Server, seq, userId, parsed) { - const store = Env.store; - const tasks = Env.tasks; const metadata_cache = Env.metadata_cache; - const channel_cache = Env.channel_cache; const HISTORY_KEEPER_ID = Env.id; const Log = Env.Log; @@ -656,30 +679,33 @@ const handleGetHistory = function (Env, Server, seq, userId, parsed) { nThen(function (waitFor) { var w = waitFor(); - - /* unless this is a young channel, we will serve all messages from an offset - this will not include the channel metadata, so we need to explicitly fetch that. - unfortunately, we can't just serve it blindly, since then young channels will - send the metadata twice, so let's do a quick check of what we're going to serve... + /* fetch the channel's metadata. + use it to check if the channel has expired. + send it to the client if it exists. */ - getIndex(Env, channelName, waitFor((err, index) => { - /* if there's an error here, it should be encountered - and handled by the next nThen block. - so, let's just fall through... 
- */ - if (err) { return w(); } - + getMetadata(Env, channelName, waitFor(function (err, metadata) { + if (err) { + Env.Log.error('HK_GET_HISTORY_METADATA', { + channel: channelName, + error: err, + }); + return void w(); + } + if (!metadata || !metadata.channel) { return w(); } + // if there is already a metadata log then use it instead + // of whatever the user supplied // it's possible that the channel doesn't have metadata // but in that case there's no point in checking if the channel expired // or in trying to send metadata, so just skip this block - if (!index || !index.metadata) { return void w(); } + if (!metadata) { return void w(); } + // And then check if the channel is expired. If it is, send the error and abort // FIXME this is hard to read because 'checkExpired' has side effects if (checkExpired(Env, Server, channelName)) { return void waitFor.abort(); } // always send metadata with GET_HISTORY requests - Server.send(userId, [0, HISTORY_KEEPER_ID, 'MSG', userId, JSON.stringify(index.metadata)], w); + Server.send(userId, [0, HISTORY_KEEPER_ID, 'MSG', userId, JSON.stringify(metadata)], w); })); }).nThen(() => { let msgCount = 0; @@ -699,45 +725,8 @@ const handleGetHistory = function (Env, Server, seq, userId, parsed) { return; } - const chan = channel_cache[channelName]; - if (msgCount === 0 && !metadata_cache[channelName] && Server.channelContainsUser(channelName, userId)) { - metadata_cache[channelName] = metadata; - - // the index will have already been constructed and cached at this point - // but it will not have detected any metadata because it hasn't been written yet - // this means that the cache starts off as invalid, so we have to correct it - if (chan && chan.index) { chan.index.metadata = metadata; } - - // new channels will always have their metadata written to a dedicated metadata log - // but any lines after the first which are not amendments in a particular format will be ignored. 
- // Thus we should be safe from race conditions here if just write metadata to the log as below... - // TODO validate this logic - // otherwise maybe we need to check that the metadata log is empty as well - store.writeMetadata(channelName, JSON.stringify(metadata), function (err) { - if (err) { - // FIXME tell the user that there was a channel error? - return void Log.error('HK_WRITE_METADATA', { - channel: channelName, - error: err, - }); - } - }); - - // write tasks - if(metadata.expire && typeof(metadata.expire) === 'number') { - // the fun part... - // the user has said they want this pad to expire at some point - tasks.write(metadata.expire, "EXPIRE", [ channelName ], function (err) { - if (err) { - // if there is an error, we don't want to crash the whole server... - // just log it, and if there's a problem you'll be able to fix it - // at a later date with the provided information - Log.error('HK_CREATE_EXPIRE_TASK', err); - Log.info('HK_INVALID_EXPIRE_TASK', JSON.stringify([metadata.expire, 'EXPIRE', channelName])); - } - }); - } + handleFirstMessage(Env, channelName, metadata); Server.send(userId, [0, HISTORY_KEEPER_ID, 'MSG', userId, JSON.stringify(metadata)]); } @@ -834,6 +823,7 @@ const directMessageCommands = { */ HK.onDirectMessage = function (Env, Server, seq, userId, json) { const Log = Env.Log; + const HISTORY_KEEPER_ID = Env.id; Log.silly('HK_MESSAGE', json); let parsed; @@ -858,7 +848,7 @@ HK.onDirectMessage = function (Env, Server, seq, userId, json) { // to stop people from loading history they shouldn't see. var channelName = parsed[1]; nThen(function (w) { - HK.getMetadata(Env, channelName, w(function (err, metadata) { + getMetadata(Env, channelName, w(function (err, metadata) { if (err) { // stream errors? 
                // we should log these, but if we can't load metadata
@@ -891,10 +881,27 @@
             return;
         }
-            // XXX NOT ALLOWED
-            // respond to txid with error as in handleGetHistory
-            // send the allow list anyway, it might not get used currently
-            // but will in the future
+/*  Anyone in the userlist that isn't in the allow list should have already
+    been kicked out of the channel. Likewise, disallowed users should not
+    be able to add themselves to the userlist because JOIN commands respect
+    access control settings. The error that is sent below protects against
+    the remaining case, in which users try to get history without having
+    joined the channel. Normally we'd send the allow list to tell them the
+    key with which they should authenticate, but since we don't use this
+    behaviour, I'm doing the easy thing and just telling them to GO AWAY.
+
+    We can implement the more advanced behaviour later if it turns out that
+    we need it. This command guards against all kinds of history
+    access: GET_HISTORY, GET_HISTORY_RANGE, GET_FULL_HISTORY.
+*/ + + w.abort(); + return void Server.send(userId, [ + seq, + 'ERROR', + 'ERESTRICTED', + HISTORY_KEEPER_ID + ]); })); }).nThen(function () { // run the appropriate command from the map @@ -937,46 +944,37 @@ HK.onChannelMessage = function (Env, Server, channel, msgStruct) { let metadata; nThen(function (w) { - // getIndex (and therefore the latest metadata) - getIndex(Env, channel.id, w(function (err, index) { - if (err) { - w.abort(); - return void Log.error('CHANNEL_MESSAGE_ERROR', err); - } - - if (!index.metadata) { - // if there's no channel metadata then it can't be an expiring channel - // nor can we possibly validate it - return; - } - - metadata = index.metadata; + getMetadata(Env, channel.id, w(function (err, _metadata) { + // if there's no channel metadata then it can't be an expiring channel + // nor can we possibly validate it + if (!_metadata) { return; } + metadata = _metadata; // don't write messages to expired channels if (checkExpired(Env, Server, channel)) { return void w.abort(); } - - // if there's no validateKey present skip to the next block - if (!metadata.validateKey) { return; } - - // trim the checkpoint indicator off the message if it's present - let signedMsg = (isCp) ? msgStruct[4].replace(CHECKPOINT_PATTERN, '') : msgStruct[4]; - // convert the message from a base64 string into a Uint8Array - - // FIXME this can fail and the client won't notice - signedMsg = Nacl.util.decodeBase64(signedMsg); - - // FIXME this can blow up - // TODO check that that won't cause any problems other than not being able to append... 
- const validateKey = Nacl.util.decodeBase64(metadata.validateKey); - // validate the message - const validated = Nacl.sign.open(signedMsg, validateKey); - if (!validated) { - // don't go any further if the message fails validation - w.abort(); - Log.info("HK_SIGNED_MESSAGE_REJECTED", 'Channel '+channel.id); - return; - } })); + }).nThen(function (w) { + // if there's no validateKey present skip to the next block + if (!metadata.validateKey) { return; } + + // trim the checkpoint indicator off the message if it's present + let signedMsg = (isCp) ? msgStruct[4].replace(CHECKPOINT_PATTERN, '') : msgStruct[4]; + // convert the message from a base64 string into a Uint8Array + + // FIXME this can fail and the client won't notice + signedMsg = Nacl.util.decodeBase64(signedMsg); + + // FIXME this can blow up + // TODO check that that won't cause any problems other than not being able to append... + const validateKey = Nacl.util.decodeBase64(metadata.validateKey); + // validate the message + const validated = Nacl.sign.open(signedMsg, validateKey); + if (!validated) { + // don't go any further if the message fails validation + w.abort(); + Log.info("HK_SIGNED_MESSAGE_REJECTED", 'Channel '+channel.id); + return; + } }).nThen(function () { // do checkpoint stuff... 
diff --git a/lib/load-config.js b/lib/load-config.js index 0756c2df4..4d6fa894f 100644 --- a/lib/load-config.js +++ b/lib/load-config.js @@ -1,7 +1,7 @@ /* jslint node: true */ "use strict"; var config; -var configPath = process.env.CRYPTPAD_CONFIG || "../config/config"; +var configPath = process.env.CRYPTPAD_CONFIG || "../config/config.js"; try { config = require(configPath); if (config.adminEmail === 'i.did.not.read.my.config@cryptpad.fr') { @@ -18,5 +18,29 @@ try { } config = require("../config/config.example"); } + +var isPositiveNumber = function (n) { + return (!isNaN(n) && n >= 0); +}; + +if (!isPositiveNumber(config.inactiveTime)) { + config.inactiveTime = 90; +} +if (!isPositiveNumber(config.archiveRetentionTime)) { + config.archiveRetentionTime = 90; +} +if (!isPositiveNumber(config.maxUploadSize)) { + config.maxUploadSize = 20 * 1024 * 1024; +} +if (!isPositiveNumber(config.defaultStorageLimit)) { + config.defaultStorageLimit = 50 * 1024 * 1024; +} + +// premiumUploadSize is worthless if it isn't a valid positive number +// or if it's less than the default upload size +if (!isPositiveNumber(config.premiumUploadSize) || config.premiumUploadSize < config.defaultStorageLimit) { + delete config.premiumUploadSize; +} + module.exports = config; diff --git a/lib/storage/file.js b/lib/storage/file.js index b1ac4de3d..6d2c672a5 100644 --- a/lib/storage/file.js +++ b/lib/storage/file.js @@ -10,11 +10,9 @@ var Util = require("../common-util"); var Meta = require("../metadata"); var Extras = require("../hk-util"); -const Schedule = require("../schedule"); -const Readline = require("readline"); -const ToPull = require('stream-to-pull-stream'); -const Pull = require('pull-stream'); +const readFileBin = require("../stream-file").readFileBin; +const Schedule = require("../schedule"); const isValidChannelId = function (id) { return typeof(id) === 'string' && id.length >= 32 && id.length < 50 && @@ -60,13 +58,24 @@ var channelExists = function (filepath, cb) { }); }; +// 
readMessagesBin asynchronously iterates over the messages in a channel log +// the handler for each message must call back to read more, which should mean +// that this function has a lower memory profile than our classic method +// of reading logs line by line. +// it also allows the handler to abort reading at any time +const readMessagesBin = (env, id, start, msgHandler, cb) => { + const stream = Fs.createReadStream(mkPath(env, id), { start: start }); + return void readFileBin(stream, msgHandler, cb); +}; + // reads classic metadata from a channel log and aborts // returns undefined if the first message was not an object (not an array) var getMetadataAtPath = function (Env, path, _cb) { - var stream; + const stream = Fs.createReadStream(path, { start: 0 }); // cb implicitly destroys the stream, if it exists // and calls back asynchronously no more than once + /* var cb = Util.once(Util.both(function () { try { stream.destroy(); @@ -74,20 +83,26 @@ var getMetadataAtPath = function (Env, path, _cb) { return err; } }, Util.mkAsync(_cb))); + */ - // stream creation emit errors... 
probably ENOENT - stream = Fs.createReadStream(path, { encoding: 'utf8' }).on('error', cb); - - // stream lines - const rl = Readline.createInterface({ - input: stream, + var cb = Util.once(Util.mkAsync(_cb), function () { + throw new Error("Multiple Callbacks"); }); var i = 0; - rl - .on('line', function (line) { + return readFileBin(stream, function (msgObj, readMore, abort) { + const line = msgObj.buff.toString('utf8'); + + if (!line) { + return readMore(); + } + // metadata should always be on the first line or not exist in the channel at all - if (i++ > 0) { return void cb(); } + if (i++ > 0) { + console.log("aborting"); + abort(); + return void cb(); + } var metadata; try { metadata = JSON.parse(line); @@ -102,9 +117,10 @@ var getMetadataAtPath = function (Env, path, _cb) { // if you can't parse, that's bad return void cb("INVALID_METADATA"); } - }) - .on('close', cb) - .on('error', cb); + readMore(); + }, function (err) { + cb(err); + }); }; var closeChannel = function (env, channelName, cb) { @@ -148,27 +164,16 @@ var clearChannel = function (env, channelId, _cb) { }; /* readMessages is our classic method of reading messages from the disk - notably doesn't provide a means of aborting if you finish early + notably doesn't provide a means of aborting if you finish early. 
+ Internally it uses readFileBin: to avoid duplicating code and to use less memory */ -var readMessages = function (path, msgHandler, cb) { - var remainder = ''; - var stream = Fs.createReadStream(path, { encoding: 'utf8' }); - var complete = function (err) { - var _cb = cb; - cb = undefined; - if (_cb) { _cb(err); } - }; - stream.on('data', function (chunk) { - var lines = chunk.split('\n'); - lines[0] = remainder + lines[0]; - remainder = lines.pop(); - lines.forEach(msgHandler); - }); - stream.on('end', function () { - msgHandler(remainder); - complete(); - }); - stream.on('error', function (e) { complete(e); }); +var readMessages = function (path, msgHandler, _cb) { + var stream = Fs.createReadStream(path, { start: 0}); + var cb = Util.once(Util.mkAsync(_cb)); + return readFileBin(stream, function (msgObj, readMore) { + msgHandler(msgObj.buff.toString('utf8')); + readMore(); + }, cb); }; /* getChannelMetadata @@ -186,22 +191,21 @@ var getChannelMetadata = function (Env, channelId, cb) { // low level method for getting just the dedicated metadata channel var getDedicatedMetadata = function (env, channelId, handler, cb) { var metadataPath = mkMetadataPath(env, channelId); - readMessages(metadataPath, function (line) { - if (!line) { return; } + var stream = Fs.createReadStream(metadataPath, {start: 0}); + readFileBin(stream, function (msgObj, readMore) { + var line = msgObj.buff.toString('utf8'); try { var parsed = JSON.parse(line); handler(null, parsed); - } catch (e) { - handler(e, line); + } catch (err) { + handler(err, line); } + readMore(); }, function (err) { - if (err) { - // ENOENT => there is no metadata log - if (err.code === 'ENOENT') { return void cb(); } - // otherwise stream errors? - return void cb(err); - } - cb(); + // ENOENT => there is no metadata log + if (!err || err.code === 'ENOENT') { return void cb(); } + // otherwise stream errors? 
+ cb(err); }); }; @@ -266,75 +270,6 @@ var writeMetadata = function (env, channelId, data, cb) { }; -// transform a stream of arbitrarily divided data -// into a stream of buffers divided by newlines in the source stream -// TODO see if we could improve performance by using libnewline -const NEWLINE_CHR = ('\n').charCodeAt(0); -const mkBufferSplit = () => { - let remainder = null; - return Pull((read) => { - return (abort, cb) => { - read(abort, function (end, data) { - if (end) { - if (data) { console.log("mkBufferSplit() Data at the end"); } - cb(end, remainder ? [remainder, data] : [data]); - remainder = null; - return; - } - const queue = []; - for (;;) { - const offset = data.indexOf(NEWLINE_CHR); - if (offset < 0) { - remainder = remainder ? Buffer.concat([remainder, data]) : data; - break; - } - let subArray = data.slice(0, offset); - if (remainder) { - subArray = Buffer.concat([remainder, subArray]); - remainder = null; - } - queue.push(subArray); - data = data.slice(offset + 1); - } - cb(end, queue); - }); - }; - }, Pull.flatten()); -}; - -// return a streaming function which transforms buffers into objects -// containing the buffer and the offset from the start of the stream -const mkOffsetCounter = () => { - let offset = 0; - return Pull.map((buff) => { - const out = { offset: offset, buff: buff }; - // +1 for the eaten newline - offset += buff.length + 1; - return out; - }); -}; - -// readMessagesBin asynchronously iterates over the messages in a channel log -// the handler for each message must call back to read more, which should mean -// that this function has a lower memory profile than our classic method -// of reading logs line by line. 
-// it also allows the handler to abort reading at any time -const readMessagesBin = (env, id, start, msgHandler, cb) => { - const stream = Fs.createReadStream(mkPath(env, id), { start: start }); - let keepReading = true; - Pull( - ToPull.read(stream), - mkBufferSplit(), - mkOffsetCounter(), - Pull.asyncMap((data, moreCb) => { - msgHandler(data, moreCb, () => { keepReading = false; moreCb(); }); - }), - Pull.drain(() => (keepReading), (err) => { - cb((keepReading) ? err : undefined); - }) - ); -}; - // check if a file exists at $path var checkPath = function (path, callback) { Fs.stat(path, function (err) { @@ -428,6 +363,7 @@ var removeArchivedChannel = function (env, channelName, cb) { }); }; +// TODO use ../plan.js for a smaller memory footprint var listChannels = function (root, handler, cb) { // do twenty things at a time var sema = Semaphore.create(20); @@ -843,6 +779,7 @@ var message = function (env, chanName, msg, cb) { }; // stream messages from a channel log +// TODO replace getMessages with readFileBin var getMessages = function (env, chanName, handler, cb) { getChannel(env, chanName, function (err, chan) { if (!chan) { diff --git a/lib/stream-file.js b/lib/stream-file.js new file mode 100644 index 000000000..dc44aaf50 --- /dev/null +++ b/lib/stream-file.js @@ -0,0 +1,76 @@ +/* jshint esversion: 6 */ +/* global Buffer */ + +const ToPull = require('stream-to-pull-stream'); +const Pull = require('pull-stream'); + +const Stream = module.exports; + +// transform a stream of arbitrarily divided data +// into a stream of buffers divided by newlines in the source stream +// TODO see if we could improve performance by using libnewline +const NEWLINE_CHR = ('\n').charCodeAt(0); +const mkBufferSplit = () => { + let remainder = null; + return Pull((read) => { + return (abort, cb) => { + read(abort, function (end, data) { + if (end) { + if (data) { console.log("mkBufferSplit() Data at the end"); } + cb(end, remainder ? 
[remainder, data] : [data]); + remainder = null; + return; + } + const queue = []; + for (;;) { + const offset = data.indexOf(NEWLINE_CHR); + if (offset < 0) { + remainder = remainder ? Buffer.concat([remainder, data]) : data; + break; + } + let subArray = data.slice(0, offset); + if (remainder) { + subArray = Buffer.concat([remainder, subArray]); + remainder = null; + } + queue.push(subArray); + data = data.slice(offset + 1); + } + cb(end, queue); + }); + }; + }, Pull.flatten()); +}; + +// return a streaming function which transforms buffers into objects +// containing the buffer and the offset from the start of the stream +const mkOffsetCounter = () => { + let offset = 0; + return Pull.map((buff) => { + const out = { offset: offset, buff: buff }; + // +1 for the eaten newline + offset += buff.length + 1; + return out; + }); +}; + +// readMessagesBin asynchronously iterates over the messages in a channel log +// the handler for each message must call back to read more, which should mean +// that this function has a lower memory profile than our classic method +// of reading logs line by line. +// it also allows the handler to abort reading at any time +Stream.readFileBin = (stream, msgHandler, cb) => { + //const stream = Fs.createReadStream(path, { start: start }); + let keepReading = true; + Pull( + ToPull.read(stream), + mkBufferSplit(), + mkOffsetCounter(), + Pull.asyncMap((data, moreCb) => { + msgHandler(data, moreCb, () => { keepReading = false; moreCb(); }); + }), + Pull.drain(() => (keepReading), (err) => { + cb((keepReading) ? 
err : undefined); + }) + ); +}; diff --git a/package-lock.json b/package-lock.json index 770871394..a43ad28a5 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,6 +1,6 @@ { "name": "cryptpad", - "version": "3.12.0", + "version": "3.13.0", "lockfileVersion": 1, "requires": true, "dependencies": { @@ -113,7 +113,9 @@ } }, "chainpad-server": { - "version": "4.0.3", + "version": "4.0.4", + "resolved": "https://registry.npmjs.org/chainpad-server/-/chainpad-server-4.0.4.tgz", + "integrity": "sha512-ApRHFmq+tL2hvQzWT811YRgZdKLfU7in/OgEECy/gxk2hi1FlUlPsEVcRe/b6LzvU9vT1CHlFeWaCNpIZe9oSw==", "requires": { "nthen": "0.1.8", "pull-stream": "^3.6.9", diff --git a/package.json b/package.json index 418637ffd..cd68d8e0c 100644 --- a/package.json +++ b/package.json @@ -1,7 +1,7 @@ { "name": "cryptpad", "description": "realtime collaborative visual editor with zero knowlege server", - "version": "3.12.0", + "version": "3.13.0", "license": "AGPL-3.0+", "repository": { "type": "git", @@ -13,7 +13,7 @@ }, "dependencies": { "chainpad-crypto": "^0.2.2", - "chainpad-server": "^4.0.3", + "chainpad-server": "^4.0.4", "express": "~4.16.0", "fs-extra": "^7.0.0", "get-folder-size": "^2.0.1", diff --git a/scripts/tests/test-rpc.js b/scripts/tests/test-rpc.js index e944a498b..91a254df3 100644 --- a/scripts/tests/test-rpc.js +++ b/scripts/tests/test-rpc.js @@ -373,11 +373,24 @@ nThen(function (w) { } })); }).nThen(function (w) { - // XXX RESTRICT GET_METADATA should fail because alice is not on the allow list - // expect INSUFFICIENT_PERMISSIONS - alice.anonRpc.send('GET_METADATA', oscar.mailboxChannel, w(function (err) { - if (!err) { - // XXX RESTRICT alice should not be permitted to read oscar's mailbox's metadata + alice.anonRpc.send('GET_METADATA', oscar.mailboxChannel, w(function (err, response) { + if (!response) { throw new Error("EXPECTED RESPONSE"); } + var metadata = response[0]; + var expected_fields = ['restricted', 'allowed']; + for (var key in metadata) { + if 
(expected_fields.indexOf(key) === -1) { + console.log(metadata); + throw new Error("EXPECTED METADATA TO BE RESTRICTED"); + } + } + })); +}).nThen(function (w) { + alice.anonRpc.send('WRITE_PRIVATE_MESSAGE', [ + oscar.mailboxChannel, + '["VANDALISM"]', + ], w(function (err) { + if (err !== 'INSUFFICIENT_PERMISSIONS') { + throw new Error("EXPECTED INSUFFICIENT PERMISSIONS ERROR"); } })); }).nThen(function (w) { @@ -388,11 +401,12 @@ nThen(function (w) { value: [ alice.edKeys.edPublic ] - }, w(function (err /*, metadata */) { - if (err) { - return void console.error(err); + }, w(function (err, response) { + var metadata = response && response[0]; + if (!metadata || !Array.isArray(metadata.allowed) || + metadata.allowed.indexOf(alice.edKeys.edPublic) === -1) { + throw new Error("EXPECTED ALICE TO BE IN THE ALLOW LIST"); } - //console.log('XXX', metadata); })); }).nThen(function (w) { oscar.anonRpc.send('GET_METADATA', oscar.mailboxChannel, w(function (err, response) { @@ -410,14 +424,12 @@ nThen(function (w) { } })); }).nThen(function () { - // XXX RESTRICT alice should now be able to read oscar's mailbox metadata -/* alice.anonRpc.send('GET_METADATA', oscar.mailboxChannel, function (err, response) { - if (err) { - PROBLEM + var metadata = response && response[0]; + if (!metadata || !metadata.restricted || !metadata.channel) { + throw new Error("EXPECTED FULL ACCESS TO CHANNEL METADATA"); } }); -*/ }).nThen(function (w) { //throw new Error("boop"); // add alice as an owner of oscar's mailbox for some reason diff --git a/server.js b/server.js index ddf9fc8b0..7b0e93687 100644 --- a/server.js +++ b/server.js @@ -8,6 +8,7 @@ var Package = require('./package.json'); var Path = require("path"); var nThen = require("nthen"); var Util = require("./lib/common-util"); +var Default = require("./lib/defaults"); var config = require("./lib/load-config"); @@ -35,6 +36,47 @@ if (process.env.PACKAGE) { FRESH_KEY = +new Date(); } +(function () { + // you absolutely must provide an 
'httpUnsafeOrigin'
+    if (typeof(config.httpUnsafeOrigin) !== 'string') {
+        throw new Error("No 'httpUnsafeOrigin' provided");
+    }
+
+    config.httpUnsafeOrigin = config.httpUnsafeOrigin.trim();
+
+    // fall back to listening on a local address
+    // if httpAddress is not a string
+    if (typeof(config.httpAddress) !== 'string') {
+        config.httpAddress = '127.0.0.1';
+    }
+
+    // listen on port 3000 if a valid port number was not provided
+    if (typeof(config.httpPort) !== 'number' || config.httpPort > 65535) {
+        config.httpPort = 3000;
+    }
+
+    if (typeof(config.httpSafeOrigin) !== 'string') {
+        if (typeof(config.httpSafePort) !== 'number') {
+            config.httpSafePort = config.httpPort + 1;
+        }
+
+        if (DEV_MODE) { return; }
+        console.log(`
+ m m mm mmmmm mm m mmmmm mm m mmm m
+ # # # ## # "# #"m # # #"m # m" " #
+ " #"# # # # #mmmm" # #m # # # #m # # mm #
+ ## ##" #mm# # "m # # # # # # # # #
+ # # # # # " # ## mm#mm # ## "mmm" #
+`);
+
+        console.log("\nNo 'httpSafeOrigin' provided.");
+        console.log("Your configuration probably isn't taking advantage of all of CryptPad's security features!");
+        console.log("This is acceptable for development, otherwise your users may be at risk.\n");
+
+        console.log("Serving sandboxed content via port %s.\nThis is probably not what you want for a production instance!\n", config.httpSafePort);
+    }
+}());
+
 var configCache = {};
 config.flushCache = function () {
     configCache = {};
@@ -47,11 +89,21 @@ config.flushCache = function () {
 const clone = (x) => (JSON.parse(JSON.stringify(x)));

 var setHeaders = (function () {
-    if (typeof(config.httpHeaders) !== 'object') { return function () {}; }
+    // load the default http headers unless the admin has provided their own via the config file
+    var headers;

-    const headers = clone(config.httpHeaders);
-    if (config.contentSecurity) {
-        headers['Content-Security-Policy'] = clone(config.contentSecurity);
+    var custom = config.httpHeaders;
+    // if the admin provided valid http headers then use them
+    if (custom &&
typeof(custom) === 'object' && !Array.isArray(custom)) { + headers = clone(custom); + } else { + // otherwise use the default + headers = Default.httpHeaders(); + } + + // next define the base Content Security Policy (CSP) headers + if (typeof(config.contentSecurity) === 'string') { + headers['Content-Security-Policy'] = config.contentSecurity; if (!/;$/.test(headers['Content-Security-Policy'])) { headers['Content-Security-Policy'] += ';' } if (headers['Content-Security-Policy'].indexOf('frame-ancestors') === -1) { // backward compat for those who do not merge the new version of the config @@ -59,10 +111,16 @@ var setHeaders = (function () { // It also fixes the cross-domain iframe. headers['Content-Security-Policy'] += "frame-ancestors *;"; } + } else { + // use the default CSP headers constructed with your domain + headers['Content-Security-Policy'] = Default.contentSecurity(config.httpUnsafeOrigin); } + const padHeaders = clone(headers); - if (config.padContentSecurity) { - padHeaders['Content-Security-Policy'] = clone(config.padContentSecurity); + if (typeof(config.padContentSecurity) === 'string') { + padHeaders['Content-Security-Policy'] = config.padContentSecurity; + } else { + padHeaders['Content-Security-Policy'] = Default.padContentSecurity(config.httpUnsafeOrigin); } if (Object.keys(headers).length) { return function (req, res) { @@ -116,7 +174,7 @@ app.use(Express.static(__dirname + '/www')); // FIXME I think this is a regression caused by a recent PR // correct this hack without breaking the contributor's intended behaviour. 
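`server.js` now falls back to built-in defaults whenever the admin has not supplied a usable `httpHeaders` object. A small sketch of that selection rule — `defaultHeaders` mirrors `Default.httpHeaders` from `lib/defaults.js`, and `chooseHeaders` is an illustrative name for logic the real `setHeaders` inlines:

```javascript
// Sketch of the header-selection rule in setHeaders: a custom httpHeaders
// value is honoured only if it is a plain object; anything else falls back
// to the defaults.
var defaultHeaders = function () {
    return {
        "X-XSS-Protection": "1; mode=block",
        "X-Content-Type-Options": "nosniff",
        "Access-Control-Allow-Origin": "*"
    };
};

var chooseHeaders = function (custom) {
    if (custom && typeof(custom) === 'object' && !Array.isArray(custom)) {
        return JSON.parse(JSON.stringify(custom)); // clone, as server.js does
    }
    return defaultHeaders();
};

console.log(chooseHeaders(undefined)["X-Content-Type-Options"]);        // "nosniff"
console.log(chooseHeaders(["not", "headers"])["X-Content-Type-Options"]); // "nosniff"
```

The `Array.isArray` check matters because `typeof []` is also `'object'`, so a misconfigured array would otherwise be accepted as a header map.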
-var mainPages = config.mainPages || ['index', 'privacy', 'terms', 'about', 'contact']; +var mainPages = config.mainPages || Default.mainPages(); var mainPagePattern = new RegExp('^\/(' + mainPages.join('|') + ').html$'); app.get(mainPagePattern, Express.static(__dirname + '/customize')); app.get(mainPagePattern, Express.static(__dirname + '/customize.dist')); @@ -163,11 +221,13 @@ var serveConfig = (function () { removeDonateButton: (config.removeDonateButton === true), allowSubscriptions: (config.allowSubscriptions === true), websocketPath: config.externalWebsocketURL, - httpUnsafeOrigin: config.httpUnsafeOrigin.replace(/^\s*/, ''), + httpUnsafeOrigin: config.httpUnsafeOrigin, adminEmail: config.adminEmail, adminKeys: admins, inactiveTime: config.inactiveTime, - supportMailbox: config.supportMailboxPublicKey + supportMailbox: config.supportMailboxPublicKey, + maxUploadSize: config.maxUploadSize, + premiumUploadSize: config.premiumUploadSize, }, null, '\t'), 'obj.httpSafeOrigin = ' + (function () { if (config.httpSafeOrigin) { return '"' + config.httpSafeOrigin + '"'; } diff --git a/www/common/common-interface.js b/www/common/common-interface.js index 46e896d32..02ad7b4bc 100644 --- a/www/common/common-interface.js +++ b/www/common/common-interface.js @@ -1194,21 +1194,28 @@ define([ var $spinner = $('', {'class': 'fa fa-spinner fa-pulse'}).hide(); var state = false; + var to; var spin = function () { + clearTimeout(to); state = true; $ok.hide(); $spinner.show(); }; var hide = function () { + clearTimeout(to); state = false; $ok.hide(); $spinner.hide(); }; var done = function () { + clearTimeout(to); state = false; $ok.show(); $spinner.hide(); + to = setTimeout(function () { + $ok.hide(); + }, 500); }; if ($container && $container.append) { diff --git a/www/common/common-ui-elements.js b/www/common/common-ui-elements.js index 4f6e73ba0..f407826e9 100644 --- a/www/common/common-ui-elements.js +++ b/www/common/common-ui-elements.js @@ -953,7 +953,6 @@ define([ 
'data-curve': data.curvePublic || '', 'data-name': name.toLowerCase(), 'data-order': i, - title: name, style: 'order:'+i+';' },[ avatar, @@ -2426,9 +2425,9 @@ define([ case 'access': button = $('