# Document Deposit Assistant
The *Document Deposit Assistant* (DDA) is a web application that imports large amounts of content, together with its metadata, from a variety of data sources into a target repository.
To accomplish this aim, DDA consists of two complementary services:
* a web-based interface, where content providers (e.g. publishers, libraries, repository managers) are guided through a wizard, answering easily understandable questions about their content management infrastructure (e.g. which software they use, such as the DSpace institutional repository application).
* a service which uses the answers elicited by the wizard to connect to the content management infrastructure or process uploaded data dumps, harmonize the metadata, and finally import the content into the target repository.

## DDA and DSpace
Currently, one DDA installation supports one target DSpace 5+ repository installation. Since a DDA installation interacts with its target repository via REST, both can be deployed and restarted independently.

Once a content provider has successfully used DDA to import a batch of content to the target DSpace repository, this content will land in the importing collection's *XMLWorkflow* task pool, where that collection's editors and reviewers will have the chance to do their usual business of validating and improving each submission before archiving (or rejecting) it.

## Initial setup
DDA is currently focused on working with a DSpace 5+ installation. In particular, it requires a running DSpace REST endpoint with additional endpoints, and DSpace must be running with *XMLWorkflow* (not the XML-less *Workflow*).
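Before continuing, it can help to confirm that the REST endpoint is reachable. A minimal smoke test, assuming your DSpace REST API is served at `http://localhost:8080/rest` (host, port, and path are assumptions; adjust them to your installation):

```shell
# The DSpace 5 REST API's /test resource returns a short status string when the API is up.
curl -s http://localhost:8080/rest/test

# Listing the top-level communities exercises a real resource of the API.
curl -s -H "Accept: application/json" http://localhost:8080/rest/communities
```

If both requests return sensible responses, the endpoint DDA needs is in place.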

### Set properties for your local installation
Edit file `config.yml` and set property values according to your specific environment.

### Create a *Document Deposit Assistant* DSpace user
DDA will import documents to DSpace as a registered DSpace user. To create a new DDA user account within DSpace:
1. Log in with administrator privileges.
2. Select *Access Control* -> *People*.
3. Click *Click here to add a new E-Person*.
4. Provide a valid and unique e-mail address, provide "Document" as first name and "Deposit Assistant" as last name, and make sure *Can Log In* is selected.
5. Click *Create E-Person*.
6. Back in the *E-person management* interface, search for e-people with the string "Deposit Assistant", select the correct *Document Deposit Assistant* e-person from the results, and click *Login as E-Person* (in case it is available) or *Reset Password* in order to give this user a password.

### Create a *Document Deposit Assistant* DSpace collection
DDA needs to know about a DSpace *collection* into which it can import its processed new items.

In your DSpace installation, we suggest creating a new DSpace collection exclusively for DDA imports. This allows you to easily wipe DDA-supplied imports in case something goes wrong.
While being logged in as a DSpace administrator, click on *Browse* -> *Communities & Collections* in order to get the *community list* overview. Either create a new community or select a community which you want the *Document Deposit Assistant* collection to be part of, and click *create Collection*. Provide a meaningful name such as *Document Deposit Assistant* and click *Create*.
You will get to the *Edit Collection* dialog. On the *Assign Roles* tab, within the *submitters* section, click *Create...*. This creates a new group which is granted submitter rights to this collection, and you will be brought to the membership dialog for this group. Within this dialog, have a look at the headline. It should be of the form `Group Editor: COLLECTION_XXX_SUBMIT (id: YYY)`. Make a note of the `XXX` part, as this is the collection *ID* (not collection *handle*) that we will require later. On this submitter group membership dialog, search for e-people with the string "Deposit Assistant", identify the correct *Document Deposit Assistant* e-person in the results, click its *Add* button, and click *Save* to finalize this step.

### Create the *Document Deposit Assistant* reference metadatum field
To track and uniquely identify a publication between DDA and the target repository, you have to set up a new metadata field in your DSpace installation.
That metadatum has the key `internal.dda.reference`. While logged in as a DSpace administrator, open the *Registries* menu section and click *Metadata*. You will land in the *Metadata registry* overview. In case there is no schema entry for *internal* yet, add this new schema with dummy values *namespace*=`internal` and *name*=`internal`, then click *Add new schema*.
Once this entry exists, click on its name `internal`. On the *Metadata Schema: "internal"* page, provide `dda` in the first input field and `reference` in the second, then click *Add new metadata field*.
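If you prefer the command line over the admin UI, DSpace's `registry-loader` tool can import a metadata registry file instead. A sketch, assuming the standard `[dspace]/bin/dspace` launcher (the `/dspace` path below is an assumption) and that this registry XML layout matches your DSpace version:

```shell
# Write a minimal metadata registry file declaring the "internal" schema
# and the internal.dda.reference field.
cat > /tmp/internal-types.xml <<'EOF'
<dspace-dc-types>
  <dc-schema>
    <name>internal</name>
    <namespace>internal</namespace>
  </dc-schema>
  <dc-type>
    <schema>internal</schema>
    <element>dda</element>
    <qualifier>reference</qualifier>
    <scope_note>DDA tracking reference</scope_note>
  </dc-type>
</dspace-dc-types>
EOF

# Load the registry file into the DSpace installation.
/dspace/bin/dspace registry-loader -metadata /tmp/internal-types.xml
```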

### *Document Deposit Assistant* service daemon installation
DDA is a Java-based webservice. It also serves out a web browser user interface (using HTML and AngularJS).
Besides starting DDA directly from the command line (by running `mvn` from DDA's source code root directory), it is possible to set up DDA as a long-running service daemon that persists transactional application data (user accounts, settings, in-transit publication metadata, etc.) to a database. In short, follow these steps to set up your own production-ready DDA instance.

In the following subsections, we will install DDA on an Ubuntu Linux machine. We will first set up the MySQL database, then prepare the filesystem, and finally install DDA as a Unix service which will automatically start up and shut down your DDA instance during Linux boot-up and shutdown, respectively.

#### Database setup
DDA uses MySQL for persisting data. Assuming a MySQL server is running and the `mysql` client tool is available, run the following commands to set up the DDA MySQL database in the state expected by DDA's `staging` profile:

    # The following command creates a new database called "dda" within your MySQL database engine.
    # When prompted with "Enter password:", the mysql command expects the password of MySQL user "root"
    # (the same applies to the two commands below).
    mysql --user=root --password --host=localhost --port=3306 --protocol=TCP --verbose --execute="create database if not exists dda character set utf8 collate utf8_general_ci;"

    # The following command creates a new user called "dda". Replace ${MYSQL_USER_DDA_PASSWORD} with a secret password and remember it.
    mysql --user=root --password --host=localhost --port=3306 --protocol=TCP --verbose --execute="create user 'dda'@'localhost' identified by '${MYSQL_USER_DDA_PASSWORD}';"

    # The following command grants the new user "dda" all database privileges on the "dda" database.
    mysql --user=root --password --host=localhost --port=3306 --protocol=TCP --verbose --execute="grant all privileges on dda.* to 'dda'@'localhost'; flush privileges;"
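To verify the setup, you can try connecting as the new `dda` user; the `dda` database should appear in the output (provide `${MYSQL_USER_DDA_PASSWORD}` when prompted):

```shell
# Connect as the freshly created "dda" user and list the databases it can see.
mysql --user=dda --password --host=localhost --port=3306 --protocol=TCP --execute="show databases;"
```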

#### Linux user and filesystem setup
Create a `dda` Linux user:

    sudo adduser --system --no-create-home --disabled-login --group dda

    # Created this way, the dda user will have no login shell, which increases security.
    # You can still gain a shell for this user by running:
    #     sudo -u dda bash

Now build a DDA production release. Run the following command:

    mvn clean package -Pprod -DskipTests

This will create the executable web application artifact located at `target/dda-wizard.war`.

Next, create a directory where the DDA binary will reside:

    sudo sh -c "mkdir /srv/dda"

Copy the following files:
* `target/dda-wizard.war` to `/srv/dda/dda-wizard.war`, 
* `etc/conf-files/prod/dda-wizard.conf` to `/srv/dda/dda-wizard.conf`, and
* `etc/conf-files/prod/application-prod.yml` to `/srv/dda/application-prod.yml`
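On the command line, these three copies might look like this (run from DDA's source code root directory):

```shell
# Copy the build artifact and its production configuration files into place.
sudo cp target/dda-wizard.war /srv/dda/dda-wizard.war
sudo cp etc/conf-files/prod/dda-wizard.conf /srv/dda/dda-wizard.conf
sudo cp etc/conf-files/prod/application-prod.yml /srv/dda/application-prod.yml
```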

Update file `/srv/dda/application-prod.yml` to reflect your production environment - that file has further helpful comments inside. In particular, provide correct values for the following keys:
* `spring.datasource.password`: this is the password of MySQL user `dda`.
* `spring.mail.(host, port, username, password)`: these are the settings for an SMTP mail server. Events such as user registration will send out e-mails using these settings.
* `server.port`: the TCP port that this DDA instance will listen on. Use a port which is not yet in use on your machine (e.g. 8081, if that port is free).
* `ingester.endpoint`: your DSpace installation's REST endpoint, e.g. ``
* ``: the e-mail address of the DSpace DDA user which you created in step [*Creating a Document Deposit Assistant DSpace user*](#creating-a-document-deposit-assistant-dspace-user)
* `ingester.password`: the password of aforementioned DSpace user
* `ingester.targetCollection`: the DSpace collection *ID* (not collection *handle*) of DDA's import collection which you created in step [*Creating a Document Deposit Assistant DSpace collection*](#creating-a-document-deposit-assistant-dspace-collection).

Set correct directory and file permissions:

    sudo sh -c "chown -R dda:dda /srv/dda/ && chmod -R u=rx,g=,o= /srv/dda"
    # This sets ownership of directory /srv/dda and all its content to user and group "dda"
    # and sets the minimally required access rights in order to further increase security.
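If you want to convince yourself what the symbolic mode `u=rx,g=,o=` expands to, you can rehearse it on a scratch directory first (the `mktemp` scratch path is just for illustration):

```shell
# Create a scratch directory with one file, apply the same symbolic mode recursively,
# and show the resulting octal mode of the file.
tmpdir=$(mktemp -d)
touch "$tmpdir/dda-wizard.war"
chmod -R u=rx,g=,o= "$tmpdir"
stat -c '%a' "$tmpdir/dda-wizard.war"   # prints "500": read+execute for the owner, nothing for anyone else
```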

#### Service registration
Assuming you want to install DDA as a *SystemV init.d* service, first create a symlink from `/etc/init.d/dda-wizard` to the executable WAR file:

    sudo ln -s /srv/dda/dda-wizard.war /etc/init.d/dda-wizard

Second, register DDA to start up and shut down during the appropriate Linux lifecycle phases:

    sudo update-rc.d dda-wizard defaults

    # If you later want to stop DDA from starting automatically, run:
    #     sudo update-rc.d -f dda-wizard remove

Now start up DDA:

    sudo service dda-wizard start

    # To shut down DDA, run: sudo service dda-wizard stop

Start following the DDA log:

    sudo tail -F /var/log/dda-wizard.log
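To check on the daemon later, the usual service and socket inspection commands apply (assuming `ss` is available; swap in `netstat -ltn` otherwise, and replace `8081` with your configured `server.port`):

```shell
# Show the init script's view of the service, then confirm the port is listening.
sudo service dda-wizard status
sudo ss -ltn | grep 8081
```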

The logs will let you know about the local and external IP addresses and ports on which DDA is listening, e.g. `` and ``, respectively.

* Try curling the local address: `curl`. You should get DDA's landing page HTML content returned on `stdout`.
* Try visiting the external address with your browser.

#### Reverse proxy configuration
In the following, it is assumed that you are using Apache2 as a reverse proxy on your DDA-hosting server machine, and that you want to have DDA available at https://${YOUR_OWN_DDA_HOST_NAME}/ .

Copy file `etc/apache-site/` to your server's Apache directory as `/etc/apache2/sites-available/${YOUR_OWN_DDA_HOST_NAME}.conf`.

Edit this file to reflect your needs:
* Change all occurrences of the example host name to `${YOUR_OWN_DDA_HOST_NAME}`.
* Change all `8081` ports to the port your DDA instance is listening on, as configured above in file `/srv/dda/application-prod.yml` under the `server.port` property.
* Change the `SSLCertificateFile`, `SSLCertificateKeyFile`, `SSLCertificateChainFile`, and `SSLCACertificateFile` properties to point to your site's SSL certificate, key, certificate chain, and CA certificate files.

This Apache site configuration follows current web best practices by redirecting all insecure HTTP connections to secure HTTP connections. Also, it makes sure to only use cryptographic primitives that are still considered secure as of this writing.

Enable this site by executing the following commands:

    sudo a2ensite ${YOUR_OWN_DDA_HOST_NAME}.conf
    sudo service apache2 reload
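Before reloading, it is worth validating the Apache configuration so that a typo in the site file does not take the whole web server down:

```shell
# Parse all Apache configuration files without applying them; prints "Syntax OK" on success.
sudo apache2ctl configtest
```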

#### DDA user passwords
For security reasons, change the default DDA user passwords.
* user `user`, default password `user`.
* user `admin`, default password `admin`.
* user `developer`, default password `developer`.

The jHipster-generated users `anonymousUser` and `system` do not need to have their passwords changed.

To change these default passwords, visit DDA's web interface, sign in with each of these default credentials, then click *Account* -> *Password*, provide a unique and strong password and click *Save*.

Also, change the default DDA users' e-mail addresses. This allows you to easily recover a user's password in case you forget it. To change the e-mail address, sign in with each default DDA user, then click *Account* -> *Settings*, provide a valid and unique e-mail address, and click *Save*.

## Development environment setup
Install the following software development tools:
* Cygwin
    * install the following Cygwin packages: `git`, `openssh`, `nano`, `pgrep`
* Java SE Development Kit 8 (JDK8)
* Eclipse IDE for Java EE Developers
* Node.js
    * install the following npm packages: `npm install -g yo generator-jhipster@2.27.1 grunt-cli bower`
* Maven

Set environment variables as documented for these tools, and put Maven's `bin` directory on your computer's `${PATH}`.

Start Cygwin and git-clone the dda-wizard project:

    cd ~/git/
    git clone

Start Eclipse. Select *File* -> *Import...* -> *Maven* -> *Existing Maven Projects* -> *Browse* to `~/git/dda-wizard` -> Click *OK* -> Click *Finish*. Wait for Eclipse to finish importing the project - in case Eclipse asks for installing additional *Maven plugin connectors*, agree to it.

Install npm dependencies once:
    cd ~/git/dda-wizard/
    npm install

To start up a DDA instance on your development machine, start Cygwin and run Maven in the DDA source code directory:

    cd ~/git/dda-wizard/
    mvn

Bower is used to manage CSS and JavaScript dependencies. You can upgrade dependencies by specifying a newer version in file `bower.json`. You can also run `bower update` and `bower install` to manage dependencies.
Add the `-h` flag on any command to see how you can use it. For example, `bower update -h`.


## Staging
### Initial setup
The staging environment shall be as close as possible to the production environment. Therefore, run the same commands as listed in the [Database setup](#database-setup) chapter.

### Debugging staging environment
The staging environment is set up in such a way that it allows connecting a remote Java debugger (see file `etc/conf-files/staging/dda-wizard.conf`). You can connect to it like so:
* First, ssh tunnel port-forward the remote debugger port with `ssh -L 8002:localhost:8002 svko-dda-test.gesis.intra`.
* Then, from Eclipse, create a new debug configuration with parameters `localhost` and port `8002`. Click connect.

### Running DDA with the `staging` profile on a development machine
To build a staging version on your development machine, run `mvn package -Pstaging -DskipTests=true`. To run this staging version on your development machine, run `java -jar target/dda-wizard.war`.
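Since DDA is a Spring Boot application (the default Maven goal is `spring-boot:run`), standard Spring Boot command-line property overrides work when running the WAR; for example, to run the staging build on a different port (the `8082` value is just an example):

```shell
# Override the configured server.port for this one run.
java -jar target/dda-wizard.war --server.port=8082
```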

# Building for production

To optimize the DDA client for production, run:

    mvn -Pprod clean package

This will concatenate and minify CSS and JavaScript files. It will also modify `index.html` so it references
these new files.

To ensure everything worked, run:

    java -jar target/*.war

Then navigate to [http://localhost:8080](http://localhost:8080) in your browser.

## Testing
Unit tests are run by Karma and written with Jasmine. They are located in `src/test/javascript` and can be run with:

    grunt test
To only test the wizard, you can run:

    mvn test -Dtest=org.gesis.dda.wizard.**.*Test

## Development
### Development methodology
#### Fixing bugs and building features on dedicated branches
It is a best practice to fix a bug or develop a new feature on a dedicated git branch and, after finishing that task, to merge the changes back into the *master* branch.
* For the whole development group, this helps to maintain a working DDA Wizard version in the *master* branch - it will never contain a half-baked version.
* For the individual developer(s) working on the branch, it helps to work on their task against a known DDA Wizard git project state; changes made concurrently on *master* by others won't interfere with their work.
* The finalizing merge into *master* makes visible the *set of changes* made to the whole DDA Wizard git project, i.e. *what* had to be changed in order to deliver the feature or bugfix.

Follow these steps in order to work with branches:

    cd ~/git/dda-wizard/

    git checkout master

    # get the current DDA Wizard repository state into your local repository
    git pull

    # create a new FEATURE or BUGFIX branch and give it a meaningful name
    git checkout -b FEATURE-fancy-feature

    # ... make modifications on this branch FEATURE-fancy-feature,
    # commit these changes,
    # and in case a work-in-progress at the end of the day leaves your branch in an inconsistent, nonworking state,
    # then add a 'WIP' (work in progress) prefix for reference.
    git add X Y Z
    git commit -m "WIP foo"

    # save those changes also on the upstream branch
    git push
    # on the first push of a branch, git may ask you to set the upstream branch...
    # ... in that case, just copy and paste the set-upstream command as provided by git

    # make some more edits, adds and commits on the local branch...

    # once you think you have finished all work on this branch, git push your branch a final time ...
    # DDA Wizard's Jenkins will deploy your branch to dda-wizard.svko-dda-test.gesis.intra ...
    # Have all feature/bugfix stakeholders (e.g. Agathe) play with the svko-dda-test instance and give you feedback

    # Assuming now that you and all others are happy with what this branch provides, merge that branch into master ...
    # First, check out your local master branch
    git checkout master

    # fetch and merge latest origin/master commits into your local master branch:
    git pull

    # now the local master branch is up-to-date

    # now merge local FEATURE-fancy-feature into your local master:
    git merge --no-ff FEATURE-fancy-feature
    # in case of merge conflicts, resolve the conflicts (hint: git mergetool)
    git commit # that's right, don't provide a commit message. Git will generate one for you.

    # assuming "merge --no-ff ..." worked, push this commit to the remote repository...
    git push

    # make a final quality assurance test on svko-dda-test, and make sure that both your new changes and all previously developed features and bugfixes work smoothly together...
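The `--no-ff` merge flow above can be rehearsed locally in a throwaway repository (the `mktemp` path and the empty commits are just for illustration):

```shell
# Rehearse the branch-and-merge workflow in a scratch repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config "dev"
git config

base=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on your git version
git commit -q --allow-empty -m "initial"

git checkout -q -b FEATURE-fancy-feature
git commit -q --allow-empty -m "WIP foo"

git checkout -q "$base"
git merge --no-ff --no-edit FEATURE-fancy-feature

git log --merges --oneline   # shows the generated merge commit
```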

### In-memory database
You can interact with the h2 in-memory database by visiting its web interface at [http://localhost:8080/h2-console](http://localhost:8080/h2-console). As *JDBC URL*, provide `jdbc:h2:mem:dda`. As *User Name*, provide DDA. Keep *Password* empty.

### Debugging
The `dev` profile activates Java debugging capability. You can connect a client debugger by pointing it to `localhost:5005`.

### Project source filesystem layout
    /       <--- development- and build- relevant files, including this README.MD, pom.xml, package.json, Gruntfile.js ... not part of the final build artifact
    |- src/
        |- main/
            |- java/      <--- dda-wizard Java source files
            |- resources/ <--- in the final build artifact, its content will land in /WEB-INF/classes/. This content won't be served out as files via HTTP.
            |- scss/      <--- Gruntfile.js configures the grunt-sass task to process SASS stylesheets in this directory
            |- webapp/    <--- in the final build artifact, its content will land in /. This content will be served out as files via HTTP!

### Build process
DDA is built with Maven. `pom.xml` configures the default `mvn` behavior to run the `spring-boot:run` goal and use the `dev` Maven profile.

#### Building with the default Maven `dev` profile
For the default maven `dev` profile, during the `generate-resources` phase, the `yeoman-maven-plugin` runs the following commands in *this* project's root directory: `npm install && bower install --no-color && grunt sass:server --force`. Let's take a look at each of these frontend-specific build steps:

##### `npm install`
`npm install` investigates file `/package.json` and downloads all (transitive) `dependencies` and `devDependencies` to directory `/node_modules`

##### `bower install --no-color`
`bower install --no-color` investigates file `/bower.json` and sees `appPath` configured to be `src/main/webapp`. Therefore, bower downloads all (transitive) `dependencies` and `devDependencies` to directory `/src/main/webapp/bower_components`

##### `grunt sass:server --force`
`grunt sass:server --force`: grunt interprets file `Gruntfile.js`. It uses `load-grunt-tasks` to automatically find and register all grunt tasks in `/node_modules/*` by looking for the default `grunt-*` pattern; this includes the `sass` task. `Gruntfile.js` configures the `sass` task to have a target `sass:server`, which configures `grunt-sass` to find *DDA*'s source Sass stylesheets at `/src/main/scss/`, to find referenced `@import`s in `/src/main/webapp/bower_components/` (using the underlying `node-sass` option `includePaths`), and to put the generated `.css` output files into `/src/main/webapp/assets/styles/`.

Having this configured, grunt executes this Sass generation.

#### Building with the Maven `staging` profile
For the staging environment, DDA is built with the Maven `staging` profile. Let's have a look at what happens when running `mvn clean package -Pstaging`.

The `yeoman-maven-plugin` runs the following commands in *this* project's root directory: `npm install && bower install --no-color && grunt test --no-color && grunt build --no-color`.

##### `grunt test --no-color`
`Gruntfile.js` registers a task `test`. It depends on the following subtasks:
* `clean:server`: this `grunt-contrib-clean` task will delete the `.tmp` directory.
* `wiredep:test`: the `grunt-wiredep` task will update file `src/test/javascript/karma.conf.js` to include all Bower components for Karma tests.
* `ngconstant:dev`: this `grunt-ng-constant` task will create a file `/src/main/webapp/scripts/app/app.constants.js` which acts as an Angular module providing two constants: `ENV=dev` and `VERSION=${POM_VERSION}`.
* `sass:server`: see the discussion earlier in *this* documentation. This task will take all Sass stylesheets from `/src/main/scss/` (and their transitive `@import` Bower dependencies), convert them to CSS, and place these CSS files into `src/main/webapp/assets/styles/`.
* `karma`: this `grunt-karma` task will use the previously updated configuration file `src/test/javascript/karma.conf.js` to configure the Karma JavaScript tests:
  * it loads the following Karma plugins: `karma-script-launcher`, `karma-chrome-launcher`, `karma-html2js-preprocessor`, `karma-jasmine`, `karma-requirejs`, `karma-phantomjs-launcher`, `karma-coverage`, `karma-jenkins-reporter`.
  * it activates coverage reporting.
  * it uses Jasmine as the testing framework.
  * it configures as reporters: `dots`, `progress`, `jenkins` (for XML JUnit format reports), and `Publish JUnit test result report`.
  * it provides to the testing browser all of the following `files`: all Bower components, all AngularJS frontend files, and almost all files in `/src/test/javascript/**`.

`karma-jasmine` will start up the AngularJS application, additionally set up all Jasmine helpers (located in `/src/test/javascript/spec/helpers/**`), and then run all Jasmine `describe(..)` tests. These tests are located in `/src/test/javascript/spec/**`.

##### `grunt build --no-color`
`Gruntfile.js` registers a task `build`. It depends on the following subtasks:
* `clean:dist`: the `grunt-contrib-clean` task will delete the `.tmp/` and `/src/main/webapp/dist/` directories.
* `wiredep:app`: this `grunt-wiredep` task will update `/src/main/webapp/index.html` to include Bower JavaScript and Bower CSS dependencies, and will update `/src/main/scss/main.scss` to include Bower SCSS dependencies.
* `ngconstant:prod`: this `grunt-ng-constant` task will create a file `.tmp/scripts/app/app.constants.js` which acts as an Angular module providing two constants: `ENV=prod` and `VERSION=${POM_VERSION}`.
* `useminPrepare`: this `grunt-usemin` task takes file `/src/main/webapp/index.html` and examines all its `build:js` and `build:css` blocks. It then dynamically adds additional task targets to the Grunt configuration: `concat:generated`, `uglifyjs:generated`, `cssmin:generated` and `autoprefixer:generated`.
* `ngtemplates`: this `grunt-angular-templates` task takes all jHipster- and DDA-specific HTML files and generates an HTML-minified, JavaScript-based templates file from them at location `/.tmp/templates/templates.js`.
* `sass:server`: see the discussion earlier in *this* documentation. This task will take all Sass stylesheets from `/src/main/scss/` (and their transitive `@import` Bower dependencies), convert them to CSS, and place these CSS files into `src/main/webapp/assets/styles/`.
* `imagemin`: this `grunt-contrib-imagemin` task will take all JPEG images from directory `/src/main/webapp/assets/images/**`, minify them, and copy the results to `/src/main/webapp/dist/assets/images/`.
* `svgmin`: this `grunt-svgmin` task behaves identically to the aforementioned `imagemin` task, but for SVG images.
* `concat`: this `grunt-contrib-concat` task will execute the previously generated `concat:generated` target (generated by `useminPrepare`). This target bundles all DDA-specific JavaScript files into a single temporary file `/.tmp/concat/scripts/app.js`, and all Bower JavaScript dependencies into a single temporary file `/.tmp/concat/scripts/vendor.js`.
* `copy:fonts`: this `grunt-contrib-copy` target copies all Bootstrap fonts to `/src/main/webapp/dist/assets/fonts/`.
* `copy:dist`: this `grunt-contrib-copy` target copies all HTML files, all images, and all fonts verbatim from `/src/main/webapp/` to `/src/main/webapp/dist/`.
* `ngAnnotate`: the `grunt-ng-annotate` task allows AngularJS dependency annotations to be expressed differently.
* `cssmin`: this `grunt-contrib-cssmin` task will execute the previously generated `cssmin:generated` target (generated by `useminPrepare`). Based on `useminPrepare`'s analysis of `/src/main/webapp/index.html`, `cssmin:generated` takes file `/src/main/webapp/assets/styles/main.css` (previously generated during the `sass:server` target), css-minifies it, and places it at `/.tmp/cssmin/assets/styles/main.css`. Also, the `index.html` analysis takes all Bower CSS dependencies, concatenates them into one bundle, minifies that bundle, and places the minified bundle at `/.tmp/cssmin/assets/styles/vendor.css`.
* `autoprefixer`: this `grunt-autoprefixer` task will execute the previously generated `autoprefixer:generated` target (generated by `useminPrepare`). This target takes the outputs of the previous usemin CSS step target `cssmin:generated` and prefixes CSS properties with vendor prefixes. As `autoprefixer` is the last step in the usemin CSS pipeline, the final usemin CSS artifacts will land in the paths specified by `useminPrepare.options.dest + $(build:css-annotations-found-in-index.html)`: `/src/main/webapp/dist/assets/styles/main.css` and `/src/main/webapp/dist/assets/styles/vendor.css`.
* `uglify`: this `grunt-contrib-uglify` task will execute the previously generated `uglify(js):generated` target (generated by `useminPrepare`). This target takes the output of the previous usemin JS step target `concat:generated` and uglify-js-minifies it. As `uglify` is the last step in the usemin JS pipeline, the final usemin JS artifacts will land in the paths specified by `useminPrepare.options.dest + $(build:js-annotations-found-in-index.html)`: `/src/main/webapp/dist/scripts/app.js` and `/src/main/webapp/dist/scripts/vendor.js`.
* `rev`: this `grunt-rev` task uses *revving* to rename JS, CSS, image, and font files in the `/src/main/webapp/dist/` directory.
* `usemin`: this `grunt-usemin` task investigates all HTML files within the `/src/main/webapp/dist/` directory (previously copied there during the `copy:dist` target execution). The task finds references to unconcatenated, unrevved assets (JS, CSS, images), then replaces these references with the concatenated, single-bundle, revved filenames.
* `htmlmin`: this `grunt-contrib-htmlmin` task takes all `/src/main/webapp/dist/*.html` files and html-minifies them in place.

# Walkthroughs and architectures

## Front-end AngularJS walkthrough
The Spring backend serves out, via HTTP, either the unprocessed or the Grunt-processed (i.e., minified, concatenated, revved, etc.) frontend artifacts. Whether unprocessed or processed artifacts are served out depends on which Spring profile is active: among other things, `@Configuration` class `org.gesis.dda.wizard.config.WebConfigurer` checks for an active `prod` Spring profile. If the `prod` profile is active, `WebConfigurer` will add an additional servlet filter called `StaticResourcesProductionFilter` to the filter chain. That filter forwards (server-side and therefore opaque to the visiting user) any frontend artifact request to the same address prefixed with `/dist` (see *Servlet 3.1* specification, *10.5 Directory Structure* for more information on how static Servlet directory content gets served out). During the Grunt build phase, processed frontend artifacts were put into directory `/dist/`.

If the `prod` profile is not active, requests for frontend artifacts will be served out normally, i.e. relative to the servlet's root directory.

Either way, an HTTP request for `/` will serve out the corresponding `index.html` artifact. `index.html` in turn references all JavaScript assets, including file `app.js`.

### `app.js`
`app.js` creates a new module, `ddaApp`, and provides all of its dependent modules. The `ddaApp` module:
* configures the *cache buster* service and *$httpProvider*'s CSRF token name.
* configures the abstract *root* state named `site`.
* configures the AngularUI view `navbar@` for the root unnamed template `index.html`. Also, `$stateProvider` is configured in such a way as to `resolve`-inject a dependency named `authorize` - an alias for the `Auth` service's `authorize()` method.
* configures `$httpProvider` interceptors which
  * in error conditions emit an error event on `$rootScope` (`errorhandler.interceptor.js`);
  * redirect to login and retry on CSRF-missing responses (`auth.interceptor.js`); and
  * trigger the `AlertService` if an `X-ddaApp-alert` response header is present (`notification.interceptor.js`).

Once all these modules have been loaded and configured, it runs an initialization function:
This function sets `$rootScope`'s `ENV` and `VERSION` constants (as generated into file `app.constants.js`).
It registers an AngularUI state change listener, which stores the requested `toState` on the `$rootScope` and introduces a router hook for further AngularUI route manipulation: the hook checks whether the user is authenticated, may route the user differently (e.g., redirecting a logged-in user from the requested `login` state to the `home` state), and may route the user to the `accessdenied` state if they lack the required authorization. Also, the application's routing behavior is augmented so that the window's title reflects the current route's `data.pageTitle`, and unknown routes are redirected to `/`.
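The registration of that listener in `app.js` presumably looks roughly like the following framework config fragment (a hedged sketch: handler details are simplified, and the exact arguments passed to `Auth.authorize()` are an assumption):

```javascript
// Sketch of the run block wiring the state change hook (simplified).
angular.module('ddaApp').run(function ($rootScope, Auth) {
  $rootScope.$on('$stateChangeStart', function (event, toState, toStateParams) {
    // Remember the requested target so Auth can resume it after login.
    $rootScope.toState = toState;
    $rootScope.toStateParams = toStateParams;
    // Defer the authentication/authorization decision to the Auth service.
    Auth.authorize();
  });
});
```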

AngularUI's router is configured via the `$stateProvider`. The jHipster convention is to have a single `x.js` file for each unique state `x`; that `x.js` file then calls the `ddaApp` module's `config(..)` block to set up the state.

This convention is used, for example, in file `main.js`: it configures a state `home` (inheriting from state `site`).
As state `home` configures its `url` to be `/` (served synonymously as `index.html`), this is the initial state.
`index.html` provides two AngularUI view placeholders: `navbar` and `content`. The `home` state fills the `content` view with `main.html` and uses the `MainController`. This controller adds to its `$scope` the user's account data promise and the `isAuthenticated()` method. All children scopes will inherit these properties.
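Following the per-state-file convention, `main.js` plausibly looks roughly like this framework config fragment (template path and option names are assumptions based on jHipster defaults, not copied from the source):

```javascript
// Sketch of the jHipster-style per-state file for the 'home' state.
angular.module('ddaApp').config(function ($stateProvider) {
  $stateProvider.state('home', {
    parent: 'site',            // inherits from the abstract root state
    url: '/',
    data: { authorities: [] }, // empty array: no roles required (public state)
    views: {
      'content@': {
        templateUrl: 'scripts/app/main/main.html', // path is an assumption
        controller: 'MainController'
      }
    }
  });
});
```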

`main.html` provides an *a href* to `#/login`. Clicking this link commands AngularUI to go to state `login`. This route, too, is conventionally configured in file `login.js`.

State `login`'s ancestors are `account -> site`. State `login` adds as [*custom state data*]( an empty `authorities` array and a custom `pageTitle`. Also, the `login` state fills into the `content` view `login.html` with the `LoginController`.

What's up with that `authorities` array?
In jHipster's convention, each state can carry as state data an array `data.authorities`, listing all authority roles permitted to access that state. This rule is enforced because `app.js` configured the AngularUI router to call `Auth.authorize()` whenever a `$stateChangeStart` event is received. An empty `data.authorities` array means that no authorities are required, so `login` is always accessible to unauthenticated visitors.
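Stripped of framework plumbing, the access check `Auth.authorize()` performs presumably reduces to logic like this (a hypothetical standalone function; the real implementation lives in the `Auth`/`Principal` services):

```javascript
// Does a user holding userAuthorities have access to a state whose
// data.authorities is stateAuthorities?
function isStateAccessible(stateAuthorities, userAuthorities) {
  // An empty (or missing) data.authorities array marks the state as public.
  if (!stateAuthorities || stateAuthorities.length === 0) {
    return true;
  }
  // Otherwise, at least one of the user's roles must match.
  return stateAuthorities.some(function (role) {
    return userAuthorities.indexOf(role) !== -1;
  });
}
```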

Let's assume we have successfully logged in and we are back at the route `/`. The `home` state configures the `content@` view with `templateUrl=main.html`; and (transitively via the parent `site` state, configured in `app.js`) the `navbar@` view with `templateUrl=navbar.html`.

Let's have a look at `navbar.html`: `NavbarController` provides to its scope the methods for checking `Principal.isAuthenticated()` and `Auth.logout()`. These methods can therefore be referenced within `navbar.html`, e.g. with the attributes `ng-click="logout()"` or `ng-switch="isAuthenticated()"`. Depending on the `isAuthenticated()` result, specific elements are added to or removed from the DOM. For instance, only if `isAuthenticated()` evaluates to `false` will the *Sign in* and *Register* entries appear in the *Account* navbar section.

When logged in, the *Entities* navbar section will show only those entity types which the currently logged-in user has permission to interact with. This behavior is defined by the `has-authority` attribute directive (file `authority.directive.js`). That directive registers a listener (`scope.$watch(..)`), which fires every time a *digest cycle* is triggered by the AngularJS framework. Whenever the result of `Principal.isAuthenticated()` changes between two consecutive digest cycles (either from `true` to `false`, or from `false` to `true`), the directive's behavior is executed (i.e., the `hidden` class is added or removed).
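A directive of this shape might look roughly as follows (a hedged framework config sketch: the `Principal.hasAuthority(..)` call and the class-toggling details are assumptions modeled on jHipster's stock directive, not copied from `authority.directive.js`):

```javascript
// Sketch of an attribute directive that hides an element unless the
// current user holds the authority named in the attribute value.
angular.module('ddaApp').directive('hasAuthority', function (Principal) {
  return {
    restrict: 'A',
    link: function (scope, element, attrs) {
      var authority = attrs.hasAuthority.replace(/\s+/g, '');
      // Re-evaluated on every digest cycle; the handler only runs when the
      // watched value (authentication state) actually changes.
      scope.$watch(function () {
        return Principal.isAuthenticated();
      }, function () {
        Principal.hasAuthority(authority).then(function (result) {
          element.toggleClass('hidden', !result);
        });
      });
    }
  };
});
```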

Let's now assume we are logged in, currently on the root `home` state, and about to select the *Bundles source* entry. Let's further assume we hold a valid, access-granting authority.

`navbar.html` declares for the *Bundles source* entry the attribute `ui-sref="bundlesSource"`. That means that as soon as we click *Bundles source*, the AngularUI router will start transitioning to state `bundlesSource`.

State `bundlesSource` is defined in file `bundlesSource.js`, with `url=/bundlesSources`. The state hierarchy is `bundlesSource -> entity -> site`. This state, too, fills the `content@` view with its own template `bundlesSources.html` and controller `BundlesSourceController`. That controller interacts with the `BundlesSource` `$resource` REST service (`bundlesSource.service.js`): e.g., every time the controller is activated, it calls `$scope.loadAll()`, asynchronously populating the `$scope.bundlesSources` array with data returned from the remote REST endpoint.
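Such a `$resource` service is typically a thin factory like the following framework config sketch (the endpoint path and custom actions are assumptions following jHipster naming conventions, not copied from `bundlesSource.service.js`):

```javascript
// Sketch of a jHipster-style $resource wrapper around the REST endpoint.
angular.module('ddaApp').factory('BundlesSource', function ($resource) {
  // 'api/bundlesSources/:id' is an assumed path following jHipster defaults.
  return $resource('api/bundlesSources/:id', {}, {
    'query':  { method: 'GET', isArray: true }, // used by loadAll()
    'get':    { method: 'GET' },
    'update': { method: 'PUT' }
  });
});
```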