Merge branch 'staging' of github.com:xwiki-labs/cryptpad into staging

pull/1/head
yflory 5 years ago
commit a6a40c3f6a

@ -1,3 +1,96 @@
# Aurochs release (v3.0.0)
The move to 3.0 is mostly because we ran out of letters in the alphabet for our 2.0 release cycle.
Releases in this cycle will be named according to a theme of "extinct animals", a list which is unfortunately getting longer all the time.
## Goals
In this release, we took more time than usual to make some big changes to the way the platform works, taking great care to maintain or improve stability.
Up until now it has been necessary to create documents with whatever settings they might require in the future, after which point it was not possible to change them. This release introduces the ability for the server to store and read amendments to document metadata. This will soon allow users of owned documents to delegate that ownership to their friends, add or modify expiration times, and make other modifications that will greatly improve their control over their data.
## Update notes
During this development period we performed an extensive audit of our existing features and discovered a few potential security issues which we've addressed. We plan to announce the details of these flaws once administrators have had sufficient time to update their instances. If you are running a CryptPad instance, we advise you to update to 3.0.0 at your earliest opportunity.
* It was brought to our attention that while expired pads were not being served beyond their expiration time, they were not being removed as intended. This was because we had failed to document a configuration point (`enableTaskScheduling`) that had been added to the example configuration file to make expiration optional. We've removed this configuration point so that tasks like expiration will always be scheduled. Expiration of tasks was already integrated into the main server process, but we have added a new configuration point to the server in case any administrators would like to run the expiration tasks in a dedicated process for performance reasons. To disable the integration, change `disableIntegratedTasks` from `false` to `true` in the server configuration file.
* This release depends on updates to three clientside libraries (`netflux-websocket@0.1.20`, `chainpad-netflux@0.9.0`, and `chainpad-listmap@0.7.0`). These changes are **not compatible with older versions of the server**. To update:
1. make any configuration changes you want
2. take down your server process
3. fetch the latest clientside and serverside code via git
4. run `bower update` and `npm install` to ensure you have the latest dependencies
5. update your cache-busting string if you've configured your instance to update this manually
6. bring your server back up
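For a typical manually-managed instance, the steps above might look like the following shell session. This is a sketch, not a definitive procedure: the service name `cryptpad` and the branch name are assumptions that will vary by deployment.

```shell
# 1-2. apply any configuration changes, then stop the running server
#      (service name is an assumption; use however you normally manage the process)
systemctl stop cryptpad
# 3. fetch the latest clientside and serverside code
git pull origin master
# 4. ensure you have the latest dependencies
bower update
npm install
# 5. update your cache-busting string here, if you manage it manually
# 6. bring your server back up
systemctl start cryptpad
```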
## Features
* Support panel
* Support tickets now include the "user agent" string of the user's browser to make it easier to debug issues.
* Users who submitted support tickets will now receive notifications when their tickets are answered
* Sharing and access control
* the "pad properties modal" now displays the name of the owner of a pad if you recognize their public key
* this will be improved further in future releases as we introduce the notion of "acquaintances" as users who you have seen in the past but who are not yet your friends
* newly created "owned pads" will now contain an "owner" field containing the address of your "mailbox", encrypted with the same key as the pad itself
* this allows users with view-only access rights to send you a message to request edit rights
* the same functionality is offered for older pads if you happen to know the mailbox address for an owner listed in the "owners" field
* it was already possible to delegate access to a friend via the "share modal", but we now support a special message type for templates so that the pad will be stored as a template in the receiving user's drive (if accepted)
* the "availability" tab of the "properties" modal for any particular pad now shows the display name of the pad's owner if they are your friend. Additionally we now support displaying multiple owners rather than just "yourself" or "somebody else"
* File and CryptDrive workflows
* we now support folder upload in any browser offering the required APIs
* it's now possible to export files and folders (as zips) directly from your CryptDrive
* the ctrl-e and right-click menus in the drive now feature an entry for uploading files and folders
* certain plain-text file formats uploaded as static files can now be rendered within other documents or used as the basis of a new code pad
* ~~regular folders in your CryptDrive can be converted into shared folders from the right-click menu as long as they do not contain shared folders and are not within another shared folder~~
* nesting is complicated for a variety of technical reasons, but we're discussing whether it's worthwhile to try to find a solution
* we found a critical bug in the implementation of this feature and disabled it for this release
* documents and folders within your CryptDrive can now be moved to parent folders by dropping them on the file path in the toolbar
* Styles
* the upload/download progress table has been restyled to be less invasive
* right-click menus throughout the platform now feature icons for each entry in addition to text
* the animation on the spinner on the loading page has been updated:
* it no longer oscillates
* it doesn't display a 'box' while the icon font is loading
* it's more dynamic and stylish (depending on your tastes)
* We've renamed the "features" page to "pricing" after many prospective users reported that it was difficult to find details about premium accounts
* Code editor updates
* you can now un-indent code blocks with shift-tab while on a line or selecting multiple lines of text
* backspace now removes the configured level of indentation
* titles which are inferred from document content now ignore any html you might have included in your markdown
## Bug fixes
* One of our users registered `CVE-2019-15302` for a bug they discovered
* users with edit access for rich text pads could change the URL of the document to load the same document in a code pad
* doing so invalidated the existing stored content, making it impossible to load the same document in the rich text editor
* doing the same steps now displays an error and does not modify the existing document
* UI and responsiveness
* submenus in contextmenus can now be opened on mobile devices
* the CryptDrive layout mode is now detected dynamically instead of at page load
* contextmenus shouldn't get rendered off the page anymore
* a non-functional ctrl-e menu could be loaded when another modal is already open, but now it is simply blocked
* icons with thumbnails in the drive no longer flicker when the page is redrawn
* the color picker in the settings page which chooses your cursor color now uses the same cross-platform library used in other applications (jsColor) so that it will work in all modern browsers
* when prompted to save a pad to your CryptDrive it was possible to click multiple times, displaying multiple confirmation messages when the pad was finally stored. We now ignore successive clicks until the first request fails or succeeds
* chat messages now only render a subset of the markdown implemented elsewhere on the platform
* your most recently used access-right settings are remembered when you delegate access directly to a friend, while previously the settings were only remembered when the other sharing methods were used
* Code editor bugs
* indentation settings modified on the settings page are updated in real time, as intended
* we discovered that when changes made by remote editors were applied to the document when the window was not focused, the user's cursor position would not be preserved. This has been fixed
* when importing code without file extensions (.bashrc, .viminfo) the file name itself was used as an extension while the name was considered empty. These file names and extensions are now parsed correctly
* language modes in the code editor are now exported with their respective file extensions
* file extensions are reapplied when importing files
* CryptDrive
* we offer a "debug" app, not advertised anywhere in the UI, which can be used to investigate strange behaviour in documents
* if the app is loaded without a hash, the hash for the user's drive is used instead
* we no longer add this document as an entry in your CryptDrive
* we guard against deleting the history of your CryptDrive if you already have such a file and you delete it permanently or move it to your trash
* we've fixed a number of bugs related to viewing and restoring invalid states from your CryptDrive's history
* Connectivity
* we've fixed a bug that caused disconnection from the server to go undetected for 30 seconds
* we discovered that leaving and rejoining a real-time session would cause the reactivation of existing listeners for that session as well as the addition of a new set of handlers. We now remove the old listeners when leaving a session, preventing a memory leak and avoiding the repeated application of incoming messages
* when we leave a session we also make sure to clean up residual data structures from the consensus engine, saving memory
* we found that support tickets on the admin page were displayed twice when the admin disconnected and reconnected while the support ticket panel was open. This has been fixed
# Zebra release (v2.25.0)
## Goals

@ -8,6 +8,12 @@ define([
Cred.MINIMUM_PASSWORD_LENGTH = typeof(AppConfig.minimumPasswordLength) === 'number'?
AppConfig.minimumPasswordLength: 8;
// https://stackoverflow.com/questions/46155/how-to-validate-an-email-address-in-javascript
Cred.isEmail = function (email) {
var re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
return re.test(String(email).toLowerCase());
};
Cred.isLongEnoughPassword = function (passwd) {
return passwd.length >= Cred.MINIMUM_PASSWORD_LENGTH;
};
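As a rough illustration, the two validators in the hunk above can be exercised standalone. This is a sketch: the regex is copied verbatim from the hunk, and the minimum length of 8 is the default shown above when `AppConfig.minimumPasswordLength` is unset.

```javascript
// Standalone sketch of the credential validators defined above.
var MINIMUM_PASSWORD_LENGTH = 8; // default from the hunk above

var isEmail = function (email) {
    var re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
    return re.test(String(email).toLowerCase());
};
var isLongEnoughPassword = function (passwd) {
    return passwd.length >= MINIMUM_PASSWORD_LENGTH;
};

console.log(isEmail('user@example.com'));     // true
console.log(isEmail('not-an-email'));         // false
console.log(isLongEnoughPassword('hunter2')); // false (only 7 characters)
```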

@ -103,7 +103,7 @@ define([
])*/
])
]),
h('div.cp-version-footer', "CryptPad v3.0.0 (Aurochs)")
h('div.cp-version-footer', "CryptPad v3.0.1 (Aurochs' revenge)")
]);
};

@ -1,7 +1,7 @@
{
"name": "cryptpad",
"description": "realtime collaborative visual editor with zero knowlege server",
"version": "3.0.0",
"version": "3.0.1",
"license": "AGPL-3.0+",
"repository": {
"type": "git",
@ -32,6 +32,7 @@
"start": "node server.js",
"dev": "DEV=1 node server.js",
"fresh": "FRESH=1 node server.js",
"package": "PACKAGE=1 node server.js",
"lint": "jshint --config .jshintrc --exclude-path .jshintignore . && ./node_modules/lesshint/bin/lesshint -c ./.lesshintrc ./customize.dist/src/less2/",
"lint:js": "jshint --config .jshintrc --exclude-path .jshintignore .",
"lint:less": "./node_modules/lesshint/bin/lesshint -c ./.lesshintrc ./customize.dist/src/less2/",

@ -22,7 +22,7 @@ The most recent version and all past release notes can be found [here](https://g
## Setup using Docker
See [Cryptpad-Docker](docs/cryptpad-docker.md) and the community wiki's [Docker](https://github.com/xwiki-labs/cryptpad/wiki/Docker-(with-Nginx-and-Traefik)) page for details on how to get up-and-running with Cryptpad in Docker.
See [Cryptpad-Docker](docs/cryptpad-docker.md) and the community wiki's [Docker](https://github.com/xwiki-labs/cryptpad/wiki/Docker) page for details on how to get up-and-running with Cryptpad in Docker.
## Setup using Ansible

@ -734,7 +734,7 @@ var pinChannel = function (Env, publicKey, channels, cb) {
}
if (pinSize > free) { return void cb('E_OVER_LIMIT'); }
Env.pinStore.message(publicKey, JSON.stringify(['PIN', toStore]),
Env.pinStore.message(publicKey, JSON.stringify(['PIN', toStore, +new Date()]),
function (e) {
if (e) { return void cb(e); }
toStore.forEach(function (channel) {
@ -766,7 +766,7 @@ var unpinChannel = function (Env, publicKey, channels, cb) {
return void getHash(Env, publicKey, cb);
}
Env.pinStore.message(publicKey, JSON.stringify(['UNPIN', toStore]),
Env.pinStore.message(publicKey, JSON.stringify(['UNPIN', toStore, +new Date()]),
function (e) {
if (e) { return void cb(e); }
toStore.forEach(function (channel) {
@ -810,7 +810,7 @@ var resetUserPins = function (Env, publicKey, channelList, cb) {
They will not be able to pin additional pads until they upgrade
or delete enough files to go back under their limit. */
if (pinSize > limit[0] && session.hasPinned) { return void(cb('E_OVER_LIMIT')); }
Env.pinStore.message(publicKey, JSON.stringify(['RESET', channelList]),
Env.pinStore.message(publicKey, JSON.stringify(['RESET', channelList, +new Date()]),
function (e) {
if (e) { return void cb(e); }
channelList.forEach(function (channel) {
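The effect of the three changes above is that every `PIN`, `UNPIN`, and `RESET` line appended to a user's pin log now carries a third element, a millisecond timestamp from `+new Date()`. A minimal sketch of what such an entry looks like when parsed back (the channel ID and timestamp values here are hypothetical):

```javascript
// Each pin log message is serialized as a JSON array: [type, channelList, timestampMs].
var entry = JSON.stringify(['PIN', ['c6a40c3f6ac6a40c3f6ac6a40c3f6a00'], 1564524000000]);

var parsed = JSON.parse(entry);
console.log(parsed[0]);                // 'PIN'
console.log(Array.isArray(parsed[1])); // true
console.log(typeof parsed[2]);         // 'number'
```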

@ -1,47 +0,0 @@
/* jshint esversion: 6, node: true */
const Fs = require("fs");
const nThen = require("nthen");
const Saferphore = require("saferphore");
const PinnedData = require('./pinneddata');
const config = require("../lib/load-config");
if (!config.inactiveTime || typeof(config.inactiveTime) !== "number") { return; }
/* Instead of this script you should probably use
evict-inactive.js which moves things to an archive directory
in case the data that would have been deleted turns out to be important.
it also handles removing that archived data after a set period of time
it only works for channels at the moment, though, and nothing else.
*/
let inactiveTime = +new Date() - (config.inactiveTime * 24 * 3600 * 1000);
let inactiveConfig = {
unpinned: true,
olderthan: inactiveTime,
blobsolderthan: inactiveTime,
filePath: config.filePath,
blobPath: config.blobPath,
pinPath: config.pinPath,
};
let toDelete;
nThen(function (waitFor) {
PinnedData.load(inactiveConfig, waitFor(function (err, data) {
if (err) {
waitFor.abort();
throw new Error(err);
}
toDelete = data;
}));
}).nThen(function () {
var sem = Saferphore.create(10);
toDelete.forEach(function (f) {
sem.take(function (give) {
Fs.unlink(f.filename, give(function (err) {
if (err) { return void console.error(err + " " + f.filename); }
console.log(f.filename + " " + f.size + " " + (+f.mtime) + " " + (+new Date()));
}));
});
});
});

@ -1,267 +0,0 @@
/* jshint esversion: 6, node: true */
const Fs = require('fs');
const Semaphore = require('saferphore');
const nThen = require('nthen');
const Path = require('path');
const Pins = require('../lib/pins');
/*
takes an array of pinned file names
and a global map of stats indexed by public keys
returns the sum of the size of those pinned files
*/
const sizeForHashes = (hashes, dsFileStats) => {
let sum = 0;
hashes.forEach((h) => {
const s = dsFileStats[h];
if (typeof(s) !== 'object' || typeof(s.size) !== 'number') {
//console.log('missing ' + h + ' ' + typeof(s));
} else {
sum += s.size;
}
});
return sum;
};
// do twenty things at a time
const sema = Semaphore.create(20);
let dirList;
const fileList = []; // array which we reuse for a lot of things
const dsFileStats = {}; // map of stats
const out = []; // what we return at the end
const pinned = {}; // map of pinned files
// define a function: 'load' which takes a config
// and a callback
module.exports.load = function (config, cb) {
var filePath = config.filePath || './datastore';
var blobPath = config.blobPath || './blob';
var pinPath = config.pinPath || './pins';
nThen((waitFor) => {
// read the subdirectories in the datastore
Fs.readdir(filePath, waitFor((err, list) => {
if (err) { throw err; }
dirList = list;
}));
}).nThen((waitFor) => {
// iterate over all subdirectories
dirList.forEach((f) => {
// process twenty subdirectories simultaneously
sema.take((returnAfter) => {
// get the list of files in every subdirectory
// and push them to 'fileList'
Fs.readdir(Path.join(filePath, f), waitFor(returnAfter((err, list2) => {
if (err) { throw err; }
list2.forEach((ff) => { fileList.push(Path.join(filePath, f, ff)); });
})));
});
});
}).nThen((waitFor) => {
// read the subdirectories in 'blob'
Fs.readdir(blobPath, waitFor((err, list) => {
if (err) { throw err; }
// overwrite dirList
dirList = list;
}));
}).nThen((waitFor) => {
// iterate over all subdirectories
dirList.forEach((f) => {
// process twenty subdirectories simultaneously
sema.take((returnAfter) => {
// get the list of files in every subdirectory
// and push them to 'fileList'
Fs.readdir(Path.join(blobPath, f), waitFor(returnAfter((err, list2) => {
if (err) { throw err; }
list2.forEach((ff) => { fileList.push(Path.join(blobPath, f, ff)); });
})));
});
});
}).nThen((waitFor) => {
// iterate over the fileList
fileList.forEach((f) => {
// process twenty files simultaneously
sema.take((returnAfter) => {
// get the stats of each file
Fs.stat(f, waitFor(returnAfter((err, st) => {
if (err) { throw err; }
st.filename = f;
// push them to a big map of stats
dsFileStats[f.replace(/^.*\/([^\/\.]*)(\.ndjson)?$/, (all, a) => (a))] = st;
})));
});
});
}).nThen((waitFor) => {
// read the subdirectories in the pinstore
Fs.readdir(pinPath, waitFor((err, list) => {
if (err) { throw err; }
dirList = list;
}));
}).nThen((waitFor) => {
// set file list to an empty array
// fileList = [] ??
fileList.splice(0, fileList.length);
dirList.forEach((f) => {
// process twenty directories at a time
sema.take((returnAfter) => {
// get the list of files in every subdirectory
// and push them to 'fileList' (which is empty because we keep reusing it)
Fs.readdir(Path.join(pinPath, f), waitFor(returnAfter((err, list2) => {
if (err) { throw err; }
list2.forEach((ff) => { fileList.push(Path.join(pinPath, f, ff)); });
})));
});
});
}).nThen((waitFor) => {
// iterate over the list of pin logs
fileList.forEach((f) => {
// twenty at a time
sema.take((returnAfter) => {
// read the full content
Fs.readFile(f, waitFor(returnAfter((err, content) => {
if (err) { throw err; }
// get the list of channels pinned by this log
const hashes = Pins.calculateFromLog(content.toString('utf8'), f);
if (config.unpinned) {
hashes.forEach((x) => { pinned[x] = 1; });
} else {
// get the size of files pinned by this log
// but only if we're gonna use it
let size = sizeForHashes(hashes, dsFileStats);
// we will return a list of values
// [user_public_key, size_of_files_they_have_pinned]
out.push([f, Math.floor(size / (1024 * 1024))]);
}
})));
});
});
}).nThen(() => {
// handle all the information you've processed so far
if (config.unpinned) {
// the user wants data about what has not been pinned
// by default we concern ourselves with pads and files older than infinity (everything)
let before = Infinity;
// but you can override this with config
if (config.olderthan) {
before = config.olderthan;
// FIXME validate inputs before doing the heavy lifting
if (isNaN(before)) { // make sure the supplied value is a number
return void cb('--olderthan error [' + config.olderthan + '] not a valid date');
}
}
// you can specify a different time for blobs...
let blobsbefore = before;
if (config.blobsolderthan) {
// use the supplied date if it exists
blobsbefore = config.blobsolderthan;
if (isNaN(blobsbefore)) {
return void cb('--blobsolderthan error [' + config.blobsolderthan + '] not a valid date');
}
}
let files = [];
// iterate over all the stats that you've saved
Object.keys(dsFileStats).forEach((f) => {
// we only care about files which are not in the pin map
if (!(f in pinned)) {
// check if it's a blob or a 'pad'
const isBlob = dsFileStats[f].filename.indexOf('.ndjson') === -1;
// if the mtime is newer than the specified value for its file type, ignore this file
if ((+dsFileStats[f].mtime) >= ((isBlob) ? blobsbefore : before)) { return; }
// otherwise push it to the list of files, with its filename, size, and mtime
files.push({
filename: dsFileStats[f].filename,
size: dsFileStats[f].size,
mtime: dsFileStats[f].mtime
});
}
});
// return the list of files
cb(null, files);
} else {
// if you're not in 'unpinned' mode, sort by size (ascending)
out.sort((a,b) => (a[1] - b[1]));
// and return the sorted data
cb(null, out.slice());
}
});
};
// This script can be called directly on its own
// or required as part of another script
if (!module.parent) {
// if no parent, it is being invoked directly
let config = {}; // build the config from command line arguments...
var Config = require("../lib/load-config");
config.filePath = Config.filePath;
config.blobPath = Config.blobPath;
config.pinPath = Config.pinPath;
// --unpinned gets the list of unpinned files
// if you don't pass this, it will list the size of pinned data per user
if (process.argv.indexOf('--unpinned') > -1) { config.unpinned = true; }
// '--olderthan' must be used in conjunction with '--unpinned'
// if you pass '--olderthan' with a string date or number, it will limit
// results only to pads older than the supplied time
// it defaults to 'infinity', or no filter at all
const ot = process.argv.indexOf('--olderthan');
if (ot > -1) {
config.olderthan = Number(process.argv[ot+1]) ? new Date(Number(process.argv[ot+1]))
: new Date(process.argv[ot+1]);
}
// '--blobsolderthan' must be used in conjunction with '--unpinned'
// if you pass '--blobsolderthan with a string date or number, it will limit
// results only to blobs older than the supplied time
// it defaults to using the same value passed '--olderthan'
const bot = process.argv.indexOf('--blobsolderthan');
if (bot > -1) {
config.blobsolderthan = Number(process.argv[bot+1]) ? new Date(Number(process.argv[bot+1]))
: new Date(process.argv[bot+1]);
}
// call our big function directly
// pass our constructed configuration and a callback
module.exports.load(config, function (err, data) {
if (err) { throw new Error(err); } // throw errors
if (!Array.isArray(data)) { return; } // if the returned value is not an array, you're done
if (config.unpinned) {
// display the list of unpinned files with their size and mtime
data.forEach((f) => { console.log(f.filename + " " + f.size + " " + (+f.mtime)); });
} else {
// display the list of public keys and the size of the data they have pinned in megabytes
data.forEach((x) => { console.log(x[0] + ' ' + x[1] + ' MB'); });
}
});
}
/* Example usage of this script...
# display the list of public keys and the size of the data they have pinned in megabytes
node pinneddata.js
# display the list of unpinned pads and blobs with their size and mtime
node pinneddata.js --unpinned
# display the list of unpinned pads and blobs older than 12345 with their size and mtime
node pinneddata.js --unpinned --olderthan 12345
# display the list of unpinned pads older than 12345 and unpinned blobs older than 123
# each with their size and mtime
node pinneddata.js --unpinned --olderthan 12345 --blobsolderthan 123
*/

@ -38,22 +38,34 @@ var app = debuggable('app', Express());
var httpsOpts;
var DEV_MODE = !!process.env.DEV
if (DEV_MODE) {
console.log("DEV MODE ENABLED");
}
// mode can be FRESH (default), DEV, or PACKAGE
var FRESH_MODE = !!process.env.FRESH;
var FRESH_KEY = '';
if (FRESH_MODE) {
var FRESH_MODE = true;
var DEV_MODE = false;
if (process.env.PACKAGE) {
// `PACKAGE=1 node server` uses the version string from package.json as the cache string
console.log("PACKAGE MODE ENABLED");
FRESH_MODE = false;
DEV_MODE = false;
} else if (process.env.DEV) {
// `DEV=1 node server` will use a random cache string on every page reload
console.log("DEV MODE ENABLED");
FRESH_MODE = false;
DEV_MODE = true;
} else {
// `FRESH=1 node server` will set a random cache string when the server is launched
// and use it for the process lifetime or until it is reset from the admin panel
console.log("FRESH MODE ENABLED");
FRESH_KEY = +new Date();
}
config.flushCache = function () {
FRESH_KEY = +new Date();
if (!config.log) { return; }
config.log.info("UPDATING_FRESH_KEY", FRESH_KEY);
};
const clone = (x) => (JSON.parse(JSON.stringify(x)));
var setHeaders = (function () {
@ -205,6 +217,7 @@ app.get('/api/config', function(req, res){
httpUnsafeOrigin: config.httpUnsafeOrigin,
adminEmail: config.adminEmail,
adminKeys: admins,
inactiveTime: config.inactiveTime,
supportMailbox: config.supportMailboxPublicKey
}, null, '\t'),
'obj.httpSafeOrigin = ' + (function () {

@ -408,7 +408,6 @@ var removeArchivedChannel = function (env, channelName, cb) {
});
};
// TODO implement a method of removing metadata that doesn't have a corresponding channel
var listChannels = function (root, handler, cb) {
// do twenty things at a time
var sema = Semaphore.create(20);
@ -442,38 +441,91 @@ var listChannels = function (root, handler, cb) {
// ignore hidden files
if (/^\./.test(item)) { return; }
// ignore anything that isn't channel or metadata
if (!/^[0-9a-fA-F]{32}(\.metadata?)*\.ndjson$/.test(item)) {
return;
}
if (!/^[0-9a-fA-F]{32}(\.metadata?)*\.ndjson$/.test(item)) { return; }
var isLonelyMetadata = false;
var channelName;
var metadataName;
// if the current file is not the channel data, then it must be metadata
if (!/^[0-9a-fA-F]{32}\.ndjson$/.test(item)) {
// this will catch metadata, which we want to ignore if
// the corresponding channel is present
if (list.indexOf(item.replace(/\.metadata/, '')) !== -1) { return; }
// otherwise fall through
}
var filepath = Path.join(nestedDirPath, item);
var channel = filepath
.replace(/\.ndjson$/, '')
.replace(/\.metadata/, '')
.replace(/.*\//, '');
metadataName = item;
channelName = item.replace(/\.metadata/, '');
// if there is a corresponding channel present in the list,
// then we should stop here and handle everything when we get to the channel
if (list.indexOf(channelName) !== -1) { return; }
// otherwise set a flag indicating that we should
// handle the metadata on its own
isLonelyMetadata = true;
} else {
channelName = item;
metadataName = channelName.replace(/\.ndjson$/, '.metadata.ndjson');
}
var filePath = Path.join(nestedDirPath, channelName);
var metadataPath = Path.join(nestedDirPath, metadataName);
var channel = metadataName.replace(/\.metadata.ndjson$/, '');
if ([32, 34].indexOf(channel.length) === -1) { return; }
// otherwise throw it on the pile
sema.take(function (give) {
var next = w(give());
Fs.stat(filepath, w(function (err, stats) {
var metaStat, channelStat;
var metaErr, channelErr;
nThen(function (ww) {
// get the stats for the metadata
Fs.stat(metadataPath, ww(function (err, stats) {
if (err) {
return void handler(err);
metaErr = err;
return;
}
metaStat = stats;
}));
handler(void 0, {
channel: channel,
atime: stats.atime,
mtime: stats.mtime,
ctime: stats.ctime,
size: stats.size,
}, next);
if (isLonelyMetadata) { return; }
Fs.stat(filePath, ww(function (err, stats) {
if (err) {
channelErr = err;
return;
}
channelStat = stats;
}));
}).nThen(function () {
if (channelErr && metaErr) {
return void handler(channelErr, void 0, next);
}
var data = {
channel: channel,
};
if (metaStat && channelStat) {
// take max of times returned by either stat
data.atime = Math.max(channelStat.atime, metaStat.atime);
data.mtime = Math.max(channelStat.mtime, metaStat.mtime);
data.ctime = Math.max(channelStat.ctime, metaStat.ctime);
// return the sum of the size of the two files
data.size = channelStat.size + metaStat.size;
} else if (metaStat) {
data.atime = metaStat.atime;
data.mtime = metaStat.mtime;
data.ctime = metaStat.ctime;
data.size = metaStat.size;
} else if (channelStat) {
data.atime = channelStat.atime;
data.mtime = channelStat.mtime;
data.ctime = channelStat.ctime;
data.size = channelStat.size;
} else {
return void handler('NO_DATA', void 0, next);
}
handler(void 0, data, next);
});
});
});
})));

@ -1,6 +1,5 @@
define([
'/bower_components/chainpad-crypto/crypto.js',
'/common/curve.js',
'/common/common-hash.js',
'/common/common-util.js',
'/common/common-realtime.js',
@ -8,8 +7,10 @@ define([
'/customize/messages.js',
'/bower_components/nthen/index.js',
], function (Crypto, Curve, Hash, Util, Realtime, Constants, Messages, nThen) {
], function (Crypto, Hash, Util, Realtime, Constants, Messages, nThen) {
'use strict';
var Curve = Crypto.Curve;
var Msg = {
inputs: [],
};

@ -1,97 +0,0 @@
define([
'/bower_components/tweetnacl/nacl-fast.min.js',
], function () {
var Nacl = window.nacl;
var Curve = {};
var concatenateUint8s = function (A) {
var len = 0;
var offset = 0;
A.forEach(function (uints) {
len += uints.length || 0;
});
var c = new Uint8Array(len);
A.forEach(function (x) {
c.set(x, offset);
offset += x.length;
});
return c;
};
var encodeBase64 = Nacl.util.encodeBase64;
var decodeBase64 = Nacl.util.decodeBase64;
var decodeUTF8 = Nacl.util.decodeUTF8;
var encodeUTF8 = Nacl.util.encodeUTF8;
Curve.encrypt = function (message, secret) {
var buffer = decodeUTF8(message);
var nonce = Nacl.randomBytes(24);
var box = Nacl.box.after(buffer, nonce, secret);
return encodeBase64(nonce) + '|' + encodeBase64(box);
};
Curve.decrypt = function (packed, secret) {
var unpacked = packed.split('|');
var nonce = decodeBase64(unpacked[0]);
var box = decodeBase64(unpacked[1]);
var message = Nacl.box.open.after(box, nonce, secret);
if (message === false) { return null; }
return encodeUTF8(message);
};
Curve.signAndEncrypt = function (msg, cryptKey, signKey) {
var packed = Curve.encrypt(msg, cryptKey);
return encodeBase64(Nacl.sign(decodeUTF8(packed), signKey));
};
Curve.openSigned = function (msg, cryptKey /*, validateKey STUBBED*/) {
var content = decodeBase64(msg).subarray(64);
return Curve.decrypt(encodeUTF8(content), cryptKey);
};
Curve.deriveKeys = function (theirs, mine) {
try {
var pub = decodeBase64(theirs);
var secret = decodeBase64(mine);
var sharedSecret = Nacl.box.before(pub, secret);
var salt = decodeUTF8('CryptPad.signingKeyGenerationSalt');
// 64 uint8s
var hash = Nacl.hash(concatenateUint8s([salt, sharedSecret]));
var signKp = Nacl.sign.keyPair.fromSeed(hash.subarray(0, 32));
var cryptKey = hash.subarray(32, 64);
return {
cryptKey: encodeBase64(cryptKey),
signKey: encodeBase64(signKp.secretKey),
validateKey: encodeBase64(signKp.publicKey)
};
} catch (e) {
console.error('invalid keys or other problem deriving keys');
console.error(e);
return null;
}
};
Curve.createEncryptor = function (keys) {
if (!keys || typeof(keys) !== 'object') {
return void console.error("invalid input for createEncryptor");
}
var cryptKey = decodeBase64(keys.cryptKey);
var signKey = decodeBase64(keys.signKey);
var validateKey = decodeBase64(keys.validateKey);
return {
encrypt: function (msg) {
return Curve.signAndEncrypt(msg, cryptKey, signKey);
},
decrypt: function (packed) {
return Curve.openSigned(packed, cryptKey, validateKey);
}
};
};
return Curve;
});

@ -253,7 +253,7 @@ define([
return void cb({error: 'User drive removal blocked!'});
}
store.rpc.removeOwnedChannel(data, function (err) {
store.rpc.removeOwnedChannel(channel, function (err) {
cb({error:err});
});
};

@ -91,9 +91,9 @@ define([
var hk = network.historyKeeper;
var cfg = {
validateKey: obj.validateKey,
metadata: {
lastKnownHash: chan.lastKnownHash || chan.lastCpHash,
metadata: {
validateKey: obj.validateKey,
owners: obj.owners,
expire: obj.expire
}

@ -464,7 +464,10 @@ define([
// convert a folder to a Shared Folder
var _convertFolderToSharedFolder = function (Env, data, cb) {
var path = data.path;
return void cb({
error: 'DISABLED'
}); // XXX CONVERT
/*var path = data.path;
var folderElement = Env.user.userObject.find(path);
// don't try to convert top-level elements (trash, root, etc) to shared-folders
// TODO also validate that you're in root (not templates, etc)
@ -554,7 +557,7 @@ define([
Env.user.userObject.delete([path], function () {
cb();
});
});
});*/
};
// Delete permanently some pads or folders

@ -399,17 +399,6 @@ define([
"Shift-Tab": function () {
editor.execCommand("indentLess");
},
"Backspace": function () {
var cursor = doc.getCursor();
var line = doc.getLine(cursor.line);
var beforeCursor = line.substring(0, cursor.ch);
if (beforeCursor && beforeCursor.trim() === "") {
editor.execCommand("indentLess");
} else {
editor.execCommand("delCharBefore");
}
},
});
$('.CodeMirror').css('font-size', fontSize+'px');
};

@@ -732,13 +732,20 @@ MessengerUI, Messages) {
         $('.cp-pad-not-pinned').remove();
         return;
     }
+    if (typeof(ApiConfig.inactiveTime) !== 'number') {
+        $('.cp-pad-not-pinned').remove();
+        return;
+    }
     if ($('.cp-pad-not-pinned').length) { return; }
-    var pnpTitle = Messages._getKey('padNotPinned', ['','','','']);
-    var pnpMsg = Messages._getKey('padNotPinned', [
+    var pnpTitle = Messages._getKey('padNotPinnedVariable', ['','','','', ApiConfig.inactiveTime]);
+    var pnpMsg = Messages._getKey('padNotPinnedVariable', [
         '<a href="' + o + '/login" class="cp-pnp-login" target="blank" title>',
         '</a>',
         '<a href="' + o + '/register" class="cp-pnp-register" target="blank" title>',
-        '</a>'
+        '</a>',
+        ApiConfig.inactiveTime
     ]);
     var $msg = $('<span>', {
         'class': 'cp-pad-not-pinned'
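Both message keys rely on positional `{n}` placeholders filled in by `Messages._getKey`; the new `padNotPinnedVariable` adds a fifth slot, `{4}`, for `ApiConfig.inactiveTime`. A minimal stand-in for the substitution (CryptPad's real helper may behave differently, for example around missing arguments):

```javascript
// Minimal positional substitution in the style of Messages._getKey:
// every {n} in the template is replaced by args[n].
function getKey(template, args) {
    return template.replace(/\{(\d+)\}/g, function (_, n) {
        var v = args[Number(n)];
        return v === undefined ? '' : String(v);
    });
}

var msg = getKey(
    "This pad will expire after {4} days of inactivity, " +
    "{0}login{1} or {2}register{3} to preserve it.",
    ['<a>', '</a>', '<a>', '</a>', 90]
);
// msg === "This pad will expire after 90 days of inactivity,
//          <a>login</a> or <a>register</a> to preserve it."
```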

@@ -1159,5 +1159,6 @@
     "owner_request_accepted": "{0} a accepté votre offre de devenir propriétaire de <b>{1}</b>",
     "owner_request_declined": "{0} a refusé votre offre de devenir propriétaire de <b>{1}</b>",
     "owner_removed": "{0} a supprimé vos droits de propriétaire de <b>{1}</b>",
-    "owner_removedPending": "{0} a annulé l'offre de co-propriété reçue pour <b>{1}</b>"
+    "owner_removedPending": "{0} a annulé l'offre de co-propriété reçue pour <b>{1}</b>",
+    "padNotPinnedVariable": "Ce pad va expirer après {4} jours d'inactivité, {0}connectez-vous{1} ou {2}enregistrez-vous{3} pour le préserver."
 }

@@ -27,6 +27,7 @@
     "onLogout": "You are logged out, {0}click here{1} to log in<br>or press <em>Escape</em> to access your pad in read-only mode.",
     "wrongApp": "Unable to display the content of that realtime session in your browser. Please try to reload that page.",
     "padNotPinned": "This pad will expire after 3 months of inactivity, {0}login{1} or {2}register{3} to preserve it.",
+    "padNotPinnedVariable": "This pad will expire after {4} days of inactivity, {0}login{1} or {2}register{3} to preserve it.",
     "anonymousStoreDisabled": "The webmaster of this CryptPad instance has disabled the store for anonymous users. You have to log in to be able to use CryptDrive.",
     "expiredError": "This pad has reached its expiration time and is no longer available.",
     "deletedError": "This pad has been deleted by its owner and is no longer available.",
@@ -435,6 +436,10 @@
     "register_cancel": "Go back",
     "register_warning": "Zero Knowledge means that we can't recover your data if you lose your password.",
     "register_alreadyRegistered": "This user already exists, do you want to log in?",
+    "register_emailWarning0": "It looks like you submitted your email as your username.",
+    "register_emailWarning1": "You can do that if you want, but it won't be sent to our server.",
+    "register_emailWarning2": "You won't be able to reset your password using your email as you can with many other services.",
+    "register_emailWarning3": "If you understand and would like to use your email for your username anyway, click OK.",
     "settings_cat_account": "Account",
     "settings_cat_drive": "CryptDrive",
     "settings_cat_cursor": "Cursor",

@@ -1162,6 +1162,7 @@ define([
         hide.push('collapseall');
     }
     containsFolder = true;
+    hide.push('share'); // XXX CONVERT
     hide.push('openro');
     hide.push('openincode');
     hide.push('properties');
@@ -3947,7 +3948,8 @@ define([
         });
     } else if (manager.isFolder(el)) { // Folder
         // if folder is inside SF
-        if (manager.isInSharedFolder(paths[0].path)) {
+        return UI.warn('ERROR: Temporarily disabled'); // XXX CONVERT
+        /*if (manager.isInSharedFolder(paths[0].path)) {
             return void UI.alert(Messages.convertFolderToSF_SFParent);
         }
         // if folder already contains SF
@@ -3977,7 +3979,7 @@ define([
             var owned = Util.isChecked($(convertContent).find('#cp-upload-owned'));
             manager.convertFolderToSharedFolder(paths[0].path, owned, password, refresh);
         });
-    }
+    }*/
     } else { // File
         data = manager.getFileData(el);
         parsed = Hash.parsePadUrl(data.href);

@@ -54,7 +54,9 @@ define([
     var registering = false;
     var test;
-    $register.click(function () {
+    var I_REALLY_WANT_TO_USE_MY_EMAIL_FOR_MY_USERNAME = false;
+    var registerClick = function () {
         var uname = $uname.val();
         var passwd = $passwd.val();
         var confirmPassword = $confirm.val();
@@ -62,6 +64,23 @@ define([
         var shouldImport = $checkImport[0].checked;
         var doesAccept = $checkAcceptTerms[0].checked;
+        if (Cred.isEmail(uname) && !I_REALLY_WANT_TO_USE_MY_EMAIL_FOR_MY_USERNAME) {
+            var emailWarning = [
+                Messages.register_emailWarning0,
+                Messages.register_emailWarning1,
+                Messages.register_emailWarning2,
+                Messages.register_emailWarning3,
+            ].join('<br><br>');
+            Feedback.send("EMAIL_USERNAME_WARNING", true);
+            return void UI.confirm(emailWarning, function (yes) {
+                if (!yes) { return; }
+                I_REALLY_WANT_TO_USE_MY_EMAIL_FOR_MY_USERNAME = true;
+                registerClick();
+            }, {}, true);
+        }
+
         /* basic validation */
         if (!Cred.isLongEnoughPassword(passwd)) {
             var warning = Messages._getKey('register_passwordTooShort', [
@@ -104,7 +123,9 @@ define([
             },
         }, true);
     }, 150);
-    });
+    };
+
+    $register.click(registerClick);
     var clickRegister = Util.notAgainForAnother(function () {
         $register.click();
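The refactor above names the handler `registerClick` so that, after the user confirms the email-as-username warning, it can set the flag and call itself again. Stripped of the CryptPad dependencies (`Cred.isEmail`, `UI.confirm`, and the registration step are stand-ins here, not the real APIs), the confirm-then-retry flow is roughly:

```javascript
// Hypothetical distillation of the registerClick confirm-then-retry pattern.
function makeRegisterClick(isEmail, confirm, register) {
    var confirmedEmailUsername = false; // plays the role of I_REALLY_WANT_TO_USE_MY_EMAIL_...
    var registerClick = function (uname) {
        if (isEmail(uname) && !confirmedEmailUsername) {
            return void confirm(function (yes) {
                if (!yes) { return; }          // user backed out
                confirmedEmailUsername = true; // don't warn a second time
                registerClick(uname);          // retry with the flag set
            });
        }
        register(uname);
    };
    return registerClick;
}

var registered = [];
var click = makeRegisterClick(
    function (u) { return u.indexOf('@') !== -1; }, // stand-in for Cred.isEmail
    function (cb) { cb(true); },                    // stand-in confirm: user clicks OK
    function (u) { registered.push(u); }
);
click('alice@example.com');
// registered now holds the email username, after exactly one warning
```

Keeping the flag in the closure (rather than resetting it per click) means a user who confirms once is not nagged again for the lifetime of the page.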
