This project uses Jekyll, Sass, and ruby-oembed to build the site and convert certain content to iframes; Jekyll requires Ruby. s3_website is mainly used to create redirects and push files to staging and production. Bundler is a package manager that makes versioning Ruby software like Jekyll much easier while also managing dependencies.
- Clone this repo locally.
- Jekyll requires the Ruby language. If you have a Mac, you've most likely already got Ruby; you can confirm this by opening the Terminal application and running `ruby --version`. Your Ruby version should be at least 2.0.0. If you've got that, you're all set. Otherwise, follow these instructions to install Ruby.
- `cd` to the project root.
- Run `gem install bundler` to install Bundler.
- Run `bundle install` to install the build dependencies.
  - If you receive any errors, try `bundle install --path vendor/bundle`.
- Run `bundle exec jekyll serve --watch` to build the site, start Jekyll, and watch for file changes.
- Open a browser to `http://127.0.0.1:4000/`.
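As a quick recap of the steps above, a typical first-time setup session run from the project root looks like this:

```sh
# Confirm Ruby is available (2.0.0 or newer)
ruby --version

# Install Bundler, then the project's build dependencies
gem install bundler
bundle install            # if this errors, try: bundle install --path vendor/bundle

# Build the site, serve it locally, and watch for file changes
bundle exec jekyll serve --watch
```

The site is then available at `http://127.0.0.1:4000/`.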
To simplify the new post creation process:

- Create all new posts without a category.
- Add all new blog posts to a single folder: create all new posts in the `./blog/_posts/` folder.
- The filename for all new posts needs to include:
  - the date, in `yyyy-mm-dd-` format
  - the slug, e.g. `this-is-my-post`
  - the file extension, `.markdown`
- The full path to a new post should look like the following: `./blog/_posts/2013-08-20-vote-now-fall-openstack-summit-presentations.markdown`
- To include a post in the list of featured posts, add the `featured` attribute with a value of `true`:
```
---
layout: post
title: "My Post Title"
slug: path-to-my-post
author: My Name
date: YYYY-MM-DD HH:MM:SS
featured: true
---
```
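For example, a new, empty post file that follows this naming convention can be created like so (the slug `my-post-title` is just a placeholder):

```sh
# Creates e.g. ./blog/_posts/2015-08-11-my-post-title.markdown (using today's date)
touch "./blog/_posts/$(date +%Y-%m-%d)-my-post-title.markdown"
```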
The site uses two configuration files, one for staging and one for production:

- `_s3_prod_config/s3_website.yml` is used for production
- `_s3_stage_config/s3_website.yml` is used for staging
The S3 configuration credentials are supplied with the following variables:
- `s3_id: <%= ENV['S3_ID'] %>`
- `s3_secret: <%= ENV['S3_SECRET'] %>`
- `s3_bucket: <%= ENV['S3_BUCKET'] %>`
As these credentials are confidential, they are supplied via environment variables and are not part of the repository.
These credentials should be added to the following files for production and staging respectively:

- `./_s3_prod_config/s3_private_config.sh`
- `./_s3_stage_config/s3_private_config.sh`

See the following files for an example:

- `./_s3_prod_config/s3_private_config.sh.dist`
- `./_s3_stage_config/s3_private_config.sh.dist`
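The `.dist` files define the exact format; as a sketch, assuming they simply export the three variables referenced above, a filled-in `s3_private_config.sh` would look something like this (all values are placeholders):

```sh
#!/bin/sh
# Placeholder credentials -- replace with the real values for this environment.
export S3_ID="AKIAEXAMPLEKEYID"
export S3_SECRET="example-secret-access-key"
export S3_BUCKET="files.example.com"
```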
The site uses s3_website to deploy files to S3. s3_website will download a JAR file the first time you run it. The following commands run as a dry run first; enter `y` to deploy:

- Run `sh _deploy/deploy.sh stage` to push to staging.
- Run `sh _deploy/deploy.sh production` to push to production.
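The deploy script itself is not reproduced here; as a rough, hypothetical sketch of what such a wrapper typically does, based only on the behavior described above (pick a config directory, dry-run, then prompt for `y`), it might look like:

```sh
#!/bin/sh
# Hypothetical sketch only -- the real _deploy/deploy.sh in this repo is authoritative.
ENVIRONMENT="$1"   # "stage" or "production"

if [ "$ENVIRONMENT" = "production" ]; then
  CONFIG_DIR="./_s3_prod_config"
else
  CONFIG_DIR="./_s3_stage_config"
fi

# Load the confidential S3 credentials into the environment
. "$CONFIG_DIR/s3_private_config.sh"

# Preview the changes first, then ask for confirmation before pushing
s3_website push --config-dir "$CONFIG_DIR" --dry-run

printf "Push to %s? (y/n) " "$ENVIRONMENT"
read answer
if [ "$answer" = "y" ]; then
  s3_website push --config-dir "$CONFIG_DIR"
fi
```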
s3_website needs both Ruby and Java to run (s3_website is partly written in Scala, hence the need for Java).
The configuration files are stored in `_s3_prod_config` and `_s3_stage_config`.
In order to change the S3 bucket, update `s3_website.yml`.
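Based on the variables listed earlier, the bucket is controlled by the `s3_bucket` key; a minimal sketch of the relevant portion of `s3_website.yml` (the checked-in files are authoritative) looks like:

```yaml
# Credentials and bucket are read from the environment (see s3_private_config.sh)
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
s3_bucket: <%= ENV['S3_BUCKET'] %>   # change this value (or S3_BUCKET) to target a different bucket
```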
- All project assets (PDFs, sheets, and images) reside within the Jekyll `assets` folder.
- Plugins are under `_plugins`.
- The `_includes` folder has the different partials for the site, such as:
  - head
  - header.html
  - pagination.html
  - search.html
- The `_layouts` folder has the different layouts for the site, such as:
  - a regular page - `page.html`
  - a single post - `post.html`
  - the authors page - `single_author.html`
- The `blog` folder has all the posts; the subfolders are used to replicate the categories from the main blog.
  - Example: `cloud-computing/_posts/2015-08-11-cloud-youre-doing-it-wrong.markdown` => `HOST/blog/cloud-computing/cloud-youre-doing-it-wrong/`
First you'll want to create your Amazon S3 bucket through the Amazon S3 console. Make sure to note the full host name assigned to the bucket you just created, for example 'files.example.com'.
- Login to your CloudFlare account.
- From the dropdown menu on the top left, select your domain. Select the DNS settings tab.
- Add a CNAME record pointing to your AWS bucket.
If your domain is "example.com" and you want to use the CNAME "files" you'll need to make sure the S3 bucket name is "files.example.com". Amazon requires that the CNAME match the bucket name.
Configuring CORS (Cross-Origin Resource Sharing), per the directions from Amazon:
Configuring your bucket for CORS is easy. To get started, open the Amazon S3 Management Console, and follow these simple steps:
- Right click on your Amazon S3 bucket and open the “Properties” pane.
- Under the “Permissions” tab, click the “Add CORS configuration” button to add a new CORS configuration. You can then specify the websites (e.g., "mywebsite.com") that should have access to your bucket, and the specific HTTP request methods (e.g., “GET”) you wish to allow.
- Click Save.
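For illustration only, a CORS configuration of that shape, allowing GET requests from the example site above, looks like this in the S3 console's XML format (adjust the origins and methods to your needs):

```xml
<CORSConfiguration>
  <CORSRule>
    <!-- The website(s) allowed to read from this bucket -->
    <AllowedOrigin>http://mywebsite.com</AllowedOrigin>
    <!-- The HTTP methods those sites may use -->
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```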
CloudFlare supports CORS and operates in the following way:
- The CloudFlare CDN identifies cache items based on the Host header + Origin header + path and query, which supports serving different objects for the same host header but different origin headers.
- CloudFlare passes the `Access-Control-Allow-Origin` header through unaltered from the origin server to the browser.
We have two S3 buckets on AWS, but the site content lives only in the canonical bucket. The other (non-canonical) bucket redirects all requests to the canonical one.
Use these settings for the canonical domain.
Use these settings for the non-canonical domain.
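As a rough equivalent to the non-canonical bucket's settings, configuring a redirect-all-requests rule with the AWS CLI instead of the console would look something like this (bucket name and target host are placeholders):

```sh
# Hypothetical example: make the non-canonical bucket redirect every request
# to the canonical host. Bucket name and hostname are placeholders.
aws s3api put-bucket-website \
  --bucket non-canonical.example.com \
  --website-configuration '{"RedirectAllRequestsTo": {"HostName": "files.example.com", "Protocol": "https"}}'
```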
This site uses GitHub Flow workflow for contributions.
Read more about GitHub Flow here.
Here is a high-level overview of the process:
- Fork the repository.
- Create a branch using `git checkout -b $BRANCH_NAME`, replacing `$BRANCH_NAME` with your branch name.
- Add commits to your branch using the `git add .` and `git commit -m ""` commands. Push your commits to your branch with `git push origin $BRANCH_NAME`.
- Open a Pull Request using the GitHub UI (https://help.github.com/articles/using-pull-requests/).
- Discuss and review your code in the GitHub UI
- Once your Pull Request has been reviewed and approved, one of the site owners will merge and deploy your Pull Request
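For reference, the Git commands above combine into a typical flow like this (the branch name and commit message are placeholders):

```sh
# Create a branch for your change
git checkout -b my-change

# Stage and commit your work
git add .
git commit -m "Describe the change"

# Push the branch to your fork, then open a Pull Request in the GitHub UI
git push origin my-change
```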