diff --git a/blog/2013-02-26-apache_cloudstack_weekly_news_25.md b/blog/2013-02-26-apache_cloudstack_weekly_news_25.md
index 12b12ed00d..1d99338755 100644
--- a/blog/2013-02-26-apache_cloudstack_weekly_news_25.md
+++ b/blog/2013-02-26-apache_cloudstack_weekly_news_25.md
@@ -43,7 +43,7 @@ possible.

QA Scrum Meeting Minutes
-The QA Scrum meeting minutes for 18 February 2013 were sent to the mailing list.
+The QA Scrum meeting minutes for 18 February 2013 were sent to the mailing list.

Weekly IRC Meeting Minutes
diff --git a/blog/2013-03-05-apache_cloudstack_weekly_news_41.md b/blog/2013-03-05-apache_cloudstack_weekly_news_41.md
index 472452d5d7..3ec755b1b9 100644
--- a/blog/2013-03-05-apache_cloudstack_weekly_news_41.md
+++ b/blog/2013-03-05-apache_cloudstack_weekly_news_41.md
@@ -32,7 +32,7 @@ code developed elsewhere.

Rohit Yadav shared that "the do-it-yourself systemvm appliance feature works for me, for Xen":

-There is one catch though, VirtualBox exports VHD appliance which is said to be compliant with HyperV. I thought we may need to do something for Xen separately, so I followed and found a way. The "way" is to export a raw disk image and convert it to a VHD 1 but the problem is the VHD created from that "way" fails when vhd-util tries to scan for any dependent VHDs (parents etc.), I don't know what's the reason.
+There is one catch though, VirtualBox exports VHD appliance which is said to be compliant with HyperV. I thought we may need to do something for Xen separately, so I followed and found a way. The "way" is to export a raw disk image and convert it to a VHD 1 but the problem is the VHD created from that "way" fails when vhd-util tries to scan for any dependent VHDs (parents etc.), I don't know what's the reason.

Read the rest of the thread if you have an interest in creating custom system VMs for CloudStack.
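Rohit's raw-export-then-convert route can be sketched roughly as below. This is a sketch only: `systemvm.vdi` is a placeholder name, and exact flags vary across VirtualBox, qemu-img, and vhd-util versions.

```shell
# 1. Export the VirtualBox disk as a raw image instead of a VHD appliance
#    (newer VirtualBox releases call this subcommand "clonemedium"):
VBoxManage clonehd systemvm.vdi systemvm.img --format RAW

# 2. Convert the raw image to a VHD; qemu calls the VHD format "vpc":
qemu-img convert -f raw -O vpc systemvm.img systemvm.vhd

# 3. This is roughly the step Rohit saw fail: vhd-util validating the
#    image and scanning for dependent (parent) VHDs.
vhd-util check -n systemvm.vhd
vhd-util scan -f -p -m systemvm.vhd
```

Note that VHDs produced by qemu-img's `vpc` driver have historically had compatibility quirks with Xen's vhd-util, which may be related to the scan failure described in the thread.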

@@ -68,7 +68,7 @@ code developed elsewhere.

API Throttling
-Parth Jagirdar has started a discuss thread about API throttling. "API throttling number can be set to anything at this point. Suggestions here is to have this number set to a value that is 'greater than' number of API that can be fired by any potential action on UI." (Note, Parth then sent out a follow-up email to correct the initial subject line from [DISCUSS} to DISCUSS, but all relevant discussion has happened in the original thread. It's probably not necessary to send a follow-up in those situations and may fragment the conversation.)
+Parth Jagirdar has started a discuss thread about API throttling. "API throttling number can be set to anything at this point. Suggestions here is to have this number set to a value that is 'greater than' number of API that can be fired by any potential action on UI." (Note, Parth then sent out a follow-up email to correct the initial subject line from [DISCUSS} to DISCUSS, but all relevant discussion has happened in the original thread. It's probably not necessary to send a follow-up in those situations and may fragment the conversation.)
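For context, CloudStack's API rate-limiting plugin is driven by global settings, which could be adjusted along these lines via CloudMonkey. Treat the exact setting names and values as assumptions to verify against your release; the point is simply to set the per-interval maximum above the number of API calls a single UI action can fire.

```shell
# Hypothetical illustration: raise the throttle ceiling above the burst
# of API calls one UI action can generate (names from the CloudStack
# API rate-limiting plugin; confirm against your version).
cloudmonkey update configuration name=api.throttling.enabled value=true
cloudmonkey update configuration name=api.throttling.interval value=1   # window, in seconds
cloudmonkey update configuration name=api.throttling.max value=25       # max API calls per window
# A management-server restart is typically required for these to take effect.
```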

Branch Stability Status
diff --git a/blog/2013-03-12-apache_cloudstack_weekly_news_111.md b/blog/2013-03-12-apache_cloudstack_weekly_news_111.md
index 9322cf2557..3a30ffec11 100644
--- a/blog/2013-03-12-apache_cloudstack_weekly_news_111.md
+++ b/blog/2013-03-12-apache_cloudstack_weekly_news_111.md
@@ -43,7 +43,7 @@ inefficient for certain primary storage types, for example if you end up creatin
Build Verification Test (BVT) for CloudStack Checkins
-Alex Huang proposed building a BVT system to "ensure that checkins do not break the master branch."
+Alex Huang proposed building a BVT system to "ensure that checkins do not break the master branch."

After a fair amount of discussion, Chip Childers responded, saying that the first step to getting Gerrit is "for us to agree to using it and to be able to clearly articulate why. Without being able to explain our issue, we'll be questioned about jumping to a tool-based solution by the infra team."

diff --git a/blog/2013-07-02-apache_cloudstack_weekly_news_12.md b/blog/2013-07-02-apache_cloudstack_weekly_news_12.md
index 064fe43090..e40df2282d 100644
--- a/blog/2013-07-02-apache_cloudstack_weekly_news_12.md
+++ b/blog/2013-07-02-apache_cloudstack_weekly_news_12.md
@@ -111,7 +111,7 @@ slug: apache_cloudstack_weekly_news_12

After complaints that the BVT environment was broken, Alex Huang did some investigating to identify the root cause and raise a suggestion on how BVT testing should be handled in the future.

-After Dave's complain in the vmsync MERGE thread about BVT in horrible shape on master, I went around to figure out what exactly happened. The best I can figure is that after a certain merge (I will leave out which merge as that's not important), BVT no longer runs automatically. It was promised to be fixed and there are people who are actively fixing it but it's been in this way for about two weeks. People running BVTs are working around the problem but it's not automated anymore and so it's no longer running on master. I understand people are nice and tried to be accommodating to other people by working around the problem but sometimes we just have to be an arse. So let me be that arse...
+After Dave's complain in the vmsync MERGE thread about BVT in horrible shape on master, I went around to figure out what exactly happened. The best I can figure is that after a certain merge (I will leave out which merge as that's not important), BVT no longer runs automatically. It was promised to be fixed and there are people who are actively fixing it but it's been in this way for about two weeks. People running BVTs are working around the problem but it's not automated anymore and so it's no longer running on master. I understand people are nice and tried to be accommodating to other people by working around the problem but sometimes we just have to be an arse. So let me be that arse...

New Rule....
If BVT or automated regression tests break on master or any release branch, we revert all commits that broke it. It doesn't matter if they promise to fix it within the next hour. If it's broken, the release manager will revert the commits and developers must resubmit. It sounds mean but it's the only way this problem can be fixed.
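The proposed rule is ordinary git mechanics. A minimal sketch with a throwaway repository (all file names and commit messages hypothetical):

```shell
set -e
# Build a scratch repository standing in for master.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=rm@example.com -c user.name=RM \
    commit -q --allow-empty -m "baseline"

# A developer lands a commit that breaks BVT.
echo "broken change" > feature.c
git add feature.c
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q -m "feature that breaks BVT"

# Under the new rule, the release manager reverts it immediately;
# the developer must resubmit once BVT passes.
git -c user.email=rm@example.com -c user.name=RM revert --no-edit HEAD
```

After the revert, `feature.c` is gone from the working tree and master carries a `Revert "feature that breaks BVT"` commit preserving the history of what happened.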

diff --git a/blog/2016-02-05-two_late_announced_security_advisories.md b/blog/2016-02-05-two_late_announced_security_advisories.md
index 865f769fbc..f8634bc50a 100644
--- a/blog/2016-02-05-two_late_announced_security_advisories.md
+++ b/blog/2016-02-05-two_late_announced_security_advisories.md
@@ -10,10 +10,10 @@ slug: two_late_announced_security_advisories

While these vulnerabilities are of moderate and low severity (respectively), the reason for this post is that the advisories were announced approximately 5 months after the first release of the patches in 4.5.2. This is personally embarrassing, unacceptable, and in a more severe case could be downright dangerous.

-What happened?
+What happened?

The CloudStack security team worked through the related vulnerabilities through the summer of 2015. We had advisory drafts, patches, and mitigations all ready well before the release. Far enough ahead, actually, that we forgot about the release and weren't paying attention to it (at least I wasn't - I know others were), and didn't send out the advisories at the appropriate time. Part of this is due to me having become an unofficial lead/spokesperson for the security team; in the past there has been at least one occasion when others released advisories when I was not available, but usually I'm coordinating issues and publishing announcements.

Luckily, the CloudStack Security Team works with and under the direction of the ASF security team. During one of their periodic reviews, they noticed CloudStack had loose ends on these two advisories, and asked for an update. Earlier today I realized the advisories had not been released, so here we are.

-How will we improve?
+How will we improve?

Obviously, we don't want to be in this situation again. Here are some steps we're taking to minimize the chance of a repeat performance: