
Conversation

Contributor

@vivus-ignis commented Sep 2, 2025

As agreed with @Akendo, splitting this PR into smaller PRs.

What this PR does / why we need it: This consolidates Python CI code from the main gardenlinux repository into the existing Python module.

Which issue(s) this PR fixes:
Fixes

Special notes for your reviewer:

Release creation code was tested on a gardenlinux repo fork: https://github.com/vivus-ignis/gardenlinux-dev/releases
As the bulk of the code was extracted from the gardenlinux/gardenlinux repository, this PR does not show what was changed there; a diff is therefore attached below:

Diff
--- ../gardenlinux/.github/workflows/release_note.py	2025-09-11 17:11:05.606466756 +0200
+++ src/gardenlinux/github/__main__.py	2025-09-15 09:23:23.073549528 +0200
@@ -1,68 +1,62 @@
-#!/usr/bin/env python3
-
-from botocore import UNSIGNED
-from botocore.client import Config
 from gardenlinux.apt import DebsrcFile
+from gardenlinux.apt.package_repo_info import GardenLinuxRepo, compare_repo
 from gardenlinux.features import CName
 from gardenlinux.flavors import Parser as FlavorsParser
 from gardenlinux.s3 import S3Artifacts
 from pathlib import Path
 from yaml.loader import SafeLoader
 import argparse
-import boto3
 import gzip
 import json
 import os
 import re
 import requests
 import shutil
-import subprocess
 import sys
 from git import Repo
 import textwrap
 import yaml
 import urllib.request
 
-from get_kernelurls import get_kernel_urls
+from ..logger import LoggerSetup
 
+LOGGER = LoggerSetup.get_logger("gardenlinux.github")
 
-GARDENLINUX_GITHUB_RELEASE_BUCKET_NAME="gardenlinux-github-releases"
+GARDENLINUX_GITHUB_RELEASE_BUCKET_NAME = "gardenlinux-github-releases"
 
+REQUESTS_TIMEOUTS = (5, 30)  # connect, read
 
-cloud_fullname_dict = {
-    'ali': 'Alibaba Cloud',
-    'aws': 'Amazon Web Services',
-    'gcp': 'Google Cloud Platform',
-    'azure': 'Microsoft Azure',
-    'openstack': 'OpenStack',
-    'openstackbaremetal': 'OpenStack Baremetal'
+CLOUD_FULLNAME_DICT = {
+    "ali": "Alibaba Cloud",
+    "aws": "Amazon Web Services",
+    "gcp": "Google Cloud Platform",
+    "azure": "Microsoft Azure",
+    "openstack": "OpenStack",
+    "openstackbaremetal": "OpenStack Baremetal",
 }
 
 # https://github.com/gardenlinux/gardenlinux/issues/3044
 # Empty string is the 'legacy' variant with traditional root fs and still needed/supported
-IMAGE_VARIANTS = ['', '_usi', '_tpm2_trustedboot']
+IMAGE_VARIANTS = ["", "_usi", "_tpm2_trustedboot"]
 
 # Variant display names and order for consistent use across functions
-VARIANT_ORDER = ['legacy', 'usi', 'tpm2_trustedboot']
+VARIANT_ORDER = ["legacy", "usi", "tpm2_trustedboot"]
 VARIANT_NAMES = {
-    'legacy': 'Default',
-    'usi': 'USI (Unified System Image)',
-    'tpm2_trustedboot': 'TPM2 Trusted Boot'
+    "legacy": "Default",
+    "usi": "USI (Unified System Image)",
+    "tpm2_trustedboot": "TPM2 Trusted Boot",
 }
 
 # Mapping from image variant suffixes to variant keys
 VARIANT_SUFFIX_MAP = {
-    '': 'legacy',
-    '_usi': 'usi',
-    '_tpm2_trustedboot': 'tpm2_trustedboot'
+    "": "legacy",
+    "_usi": "usi",
+    "_tpm2_trustedboot": "tpm2_trustedboot",
 }
 
 # Short display names for table view
-VARIANT_TABLE_NAMES = {
-    'legacy': 'Default',
-    'usi': 'USI',
-    'tpm2_trustedboot': 'TPM2'
-}
+VARIANT_TABLE_NAMES = {"legacy": "Default", "usi": "USI", "tpm2_trustedboot": "TPM2"}
+
 
 def get_variant_from_flavor(flavor_name):
     """
@@ -70,12 +64,13 @@
     Returns the variant key (e.g., 'legacy', 'usi', 'tpm2_trustedboot').
     """
     match flavor_name:
-        case name if '_usi' in name:
-            return 'usi'
-        case name if '_tpm2_trustedboot' in name:
-            return 'tpm2_trustedboot'
+        case name if "_usi" in name:
+            return "usi"
+        case name if "_tpm2_trustedboot" in name:
+            return "tpm2_trustedboot"
         case _:
-            return 'legacy'
+            return "legacy"
+
 
 def get_platform_release_note_data(metadata, platform):
     """
@@ -83,168 +78,159 @@
     Returns the structured data dictionary.
     """
     match platform:
-        case 'ali':
+        case "ali":
             return _ali_release_note(metadata)
-        case 'aws':
+        case "aws":
             return _aws_release_note(metadata)
-        case 'gcp':
+        case "gcp":
             return _gcp_release_note(metadata)
-        case 'azure':
+        case "azure":
             return _azure_release_note(metadata)
-        case 'openstack':
+        case "openstack":
             return _openstack_release_note(metadata)
-        case 'openstackbaremetal':
+        case "openstackbaremetal":
             return _openstackbaremetal_release_note(metadata)
         case _:
-            print(f"unknown platform {platform}")
+            LOGGER.error(f"unknown platform {platform}")
             return None
 
+
 def get_file_extension_for_platform(platform):
     """
     Get the correct file extension for a given platform.
     """
     match platform:
-        case 'ali':
-            return '.qcow2'
-        case 'gcp':
-            return '.gcpimage.tar.gz'
-        case 'azure':
-            return '.vhd'
-        case 'aws' | 'openstack' | 'openstackbaremetal':
-            return '.raw'
+        case "ali":
+            return ".qcow2"
+        case "gcp":
+            return ".gcpimage.tar.gz"
+        case "azure":
+            return ".vhd"
+        case "aws" | "openstack" | "openstackbaremetal":
+            return ".raw"
         case _:
-            return '.raw'  # Default fallback
+            return ".raw"  # Default fallback
+
 
 def get_platform_display_name(platform):
     """
     Get the display name for a platform.
     """
     match platform:
-        case 'ali' | 'openstackbaremetal' | 'openstack' | 'azure' | 'gcp' | 'aws':
-            return cloud_fullname_dict[platform]
+        case "ali" | "openstackbaremetal" | "openstack" | "azure" | "gcp" | "aws":
+            return CLOUD_FULLNAME_DICT[platform]
         case _:
             return platform.upper()
 
+
 def _ali_release_note(metadata):
-    published_image_metadata = metadata['published_image_metadata']
-    flavor_name = metadata['s3_key'].split('/')[-1]  # Extract flavor from s3_key
+    published_image_metadata = metadata["published_image_metadata"]
+    flavor_name = metadata["s3_key"].split("/")[-1]  # Extract flavor from s3_key
 
     regions = []
     for pset in published_image_metadata:
         for p in published_image_metadata[pset]:
-            regions.append({
-                'region': p['region_id'],
-                'image_id': p['image_id']
-            })
+            regions.append({"region": p["region_id"], "image_id": p["image_id"]})
 
-    return {
-        'flavor': flavor_name,
-        'regions': regions
-    }
+    return {"flavor": flavor_name, "regions": regions}
 
 
 def _aws_release_note(metadata):
-    published_image_metadata = metadata['published_image_metadata']
-    flavor_name = metadata['s3_key'].split('/')[-1]  # Extract flavor from s3_key
+    published_image_metadata = metadata["published_image_metadata"]
+    flavor_name = metadata["s3_key"].split("/")[-1]  # Extract flavor from s3_key
 
     regions = []
     for pset in published_image_metadata:
         for p in published_image_metadata[pset]:
-            regions.append({
-                'region': p['aws_region_id'],
-                'image_id': p['ami_id']
-            })
+            regions.append({"region": p["aws_region_id"], "image_id": p["ami_id"]})
 
-    return {
-        'flavor': flavor_name,
-        'regions': regions
-    }
+    return {"flavor": flavor_name, "regions": regions}
 
 
 def _gcp_release_note(metadata):
-    published_image_metadata = metadata['published_image_metadata']
-    flavor_name = metadata['s3_key'].split('/')[-1]  # Extract flavor from s3_key
+    published_image_metadata = metadata["published_image_metadata"]
+    flavor_name = metadata["s3_key"].split("/")[-1]  # Extract flavor from s3_key
 
     details = {}
-    if 'gcp_image_name' in published_image_metadata:
-        details['image_name'] = published_image_metadata['gcp_image_name']
-    if 'gcp_project_name' in published_image_metadata:
-        details['project'] = published_image_metadata['gcp_project_name']
-    details['availability'] = "Global (all regions)"
+    if "gcp_image_name" in published_image_metadata:
+        details["image_name"] = published_image_metadata["gcp_image_name"]
+    if "gcp_project_name" in published_image_metadata:
+        details["project"] = published_image_metadata["gcp_project_name"]
+    details["availability"] = "Global (all regions)"
 
-    return {
-        'flavor': flavor_name,
-        'details': details
-    }
+    return {"flavor": flavor_name, "details": details}
 
 
 def _openstack_release_note(metadata):
-    published_image_metadata = metadata['published_image_metadata']
-    flavor_name = metadata['s3_key'].split('/')[-1]  # Extract flavor from s3_key
+    published_image_metadata = metadata["published_image_metadata"]
+    flavor_name = metadata["s3_key"].split("/")[-1]  # Extract flavor from s3_key
 
     regions = []
-    if 'published_openstack_images' in published_image_metadata:
-        for image in published_image_metadata['published_openstack_images']:
-            regions.append({
-                'region': image['region_name'],
-                'image_id': image['image_id'],
-                'image_name': image['image_name']
-            })
-
-    return {
-        'flavor': flavor_name,
-        'regions': regions
+    if "published_openstack_images" in published_image_metadata:
+        for image in published_image_metadata["published_openstack_images"]:
+            regions.append(
+                {
+                    "region": image["region_name"],
+                    "image_id": image["image_id"],
+                    "image_name": image["image_name"],
     }
+            )
+
+    return {"flavor": flavor_name, "regions": regions}
 
 
 def _openstackbaremetal_release_note(metadata):
-    published_image_metadata = metadata['published_image_metadata']
-    flavor_name = metadata['s3_key'].split('/')[-1]  # Extract flavor from s3_key
+    published_image_metadata = metadata["published_image_metadata"]
+    flavor_name = metadata["s3_key"].split("/")[-1]  # Extract flavor from s3_key
 
     regions = []
-    if 'published_openstack_images' in published_image_metadata:
-        for image in published_image_metadata['published_openstack_images']:
-            regions.append({
-                'region': image['region_name'],
-                'image_id': image['image_id'],
-                'image_name': image['image_name']
-            })
-
-    return {
-        'flavor': flavor_name,
-        'regions': regions
+    if "published_openstack_images" in published_image_metadata:
+        for image in published_image_metadata["published_openstack_images"]:
+            regions.append(
+                {
+                    "region": image["region_name"],
+                    "image_id": image["image_id"],
+                    "image_name": image["image_name"],
     }
+            )
+
+    return {"flavor": flavor_name, "regions": regions}
 
 
 def _azure_release_note(metadata):
-    published_image_metadata = metadata['published_image_metadata']
-    flavor_name = metadata['s3_key'].split('/')[-1]  # Extract flavor from s3_key
+    published_image_metadata = metadata["published_image_metadata"]
+    flavor_name = metadata["s3_key"].split("/")[-1]  # Extract flavor from s3_key
 
     gallery_images = []
     marketplace_images = []
 
     for pset in published_image_metadata:
-        if pset == 'published_gallery_images':
+        if pset == "published_gallery_images":
             for gallery_image in published_image_metadata[pset]:
-                gallery_images.append({
-                    'hyper_v_generation': gallery_image['hyper_v_generation'],
-                    'azure_cloud': gallery_image['azure_cloud'],
-                    'image_id': gallery_image['community_gallery_image_id']
-                })
+                gallery_images.append(
+                    {
+                        "hyper_v_generation": gallery_image["hyper_v_generation"],
+                        "azure_cloud": gallery_image["azure_cloud"],
+                        "image_id": gallery_image["community_gallery_image_id"],
+                    }
+                )
 
-        if pset == 'published_marketplace_images':
+        if pset == "published_marketplace_images":
             for market_image in published_image_metadata[pset]:
-                marketplace_images.append({
-                    'hyper_v_generation': market_image['hyper_v_generation'],
-                    'urn': market_image['urn']
-                })
+                marketplace_images.append(
+                    {
+                        "hyper_v_generation": market_image["hyper_v_generation"],
+                        "urn": market_image["urn"],
+                    }
+                )
 
     return {
-        'flavor': flavor_name,
-        'gallery_images': gallery_images,
-        'marketplace_images': marketplace_images
+        "flavor": flavor_name,
+        "gallery_images": gallery_images,
+        "marketplace_images": marketplace_images,
     }
 
+
 def generate_release_note_image_ids(metadata_files):
     """
     Groups metadata files by image variant, then platform, then architecture
@@ -256,16 +242,16 @@
         with open(metadata_file_path) as f:
             metadata = yaml.load(f, Loader=SafeLoader)
 
-        published_image_metadata = metadata['published_image_metadata']
+        published_image_metadata = metadata["published_image_metadata"]
         # Skip if no publishing metadata found
         if published_image_metadata is None:
             continue
 
-        platform = metadata['platform']
-        arch = metadata['architecture']
+        platform = metadata["platform"]
+        arch = metadata["architecture"]
 
         # Determine variant from flavor name
-        flavor_name = metadata['s3_key'].split('/')[-1]
+        flavor_name = metadata["s3_key"].split("/")[-1]
         variant = get_variant_from_flavor(flavor_name)
 
         if variant not in grouped_data:
@@ -292,6 +278,7 @@
 
     return output
 
+
 def generate_table_format(grouped_data):
     """
     Generate the table format with collapsible region details
@@ -318,7 +305,7 @@
                     summary_text = generate_summary_text(data, platform)
 
                     # Generate download links
-                    download_links = generate_download_links(data['flavor'], platform)
+                    download_links = generate_download_links(data["flavor"], platform)
 
                     # Use shorter names for table display
                     variant_display = VARIANT_TABLE_NAMES[variant]
@@ -326,6 +313,7 @@
 
     return output
 
+
 def generate_region_details(data, platform):
     """
     Generate the detailed region information for the collapsible section
@@ -333,17 +321,24 @@
     details = ""
 
     match data:
-        case {'regions': regions}:
+        case {"regions": regions}:
             for region in regions:
                 match region:
-                    case {'region': region_name, 'image_id': image_id, 'image_name': image_name}:
+                    case {
+                        "region": region_name,
+                        "image_id": image_id,
+                        "image_name": image_name,
+                    }:
                         details += f"**{region_name}:** {image_id} ({image_name})<br>"
-                    case {'region': region_name, 'image_id': image_id}:
+                    case {"region": region_name, "image_id": image_id}:
                         details += f"**{region_name}:** {image_id}<br>"
-        case {'details': details_dict}:
+        case {"details": details_dict}:
             for key, value in details_dict.items():
                 details += f"**{key.replace('_', ' ').title()}:** {value}<br>"
-        case {'gallery_images': gallery_images, 'marketplace_images': marketplace_images}:
+        case {
+            "gallery_images": gallery_images,
+            "marketplace_images": marketplace_images,
+        }:
             if gallery_images:
                 details += "**Gallery Images:**<br>"
                 for img in gallery_images:
@@ -352,40 +347,45 @@
                 details += "**Marketplace Images:**<br>"
                 for img in marketplace_images:
                     details += f"• {img['hyper_v_generation']}: {img['urn']}<br>"
-        case {'gallery_images': gallery_images}:
+        case {"gallery_images": gallery_images}:
             details += "**Gallery Images:**<br>"
             for img in gallery_images:
                 details += f"• {img['hyper_v_generation']} ({img['azure_cloud']}): {img['image_id']}<br>"
-        case {'marketplace_images': marketplace_images}:
+        case {"marketplace_images": marketplace_images}:
             details += "**Marketplace Images:**<br>"
             for img in marketplace_images:
                 details += f"• {img['hyper_v_generation']}: {img['urn']}<br>"
 
     return details
 
+
 def generate_summary_text(data, platform):
     """
     Generate the summary text for the collapsible section
     """
     match data:
-        case {'regions': regions}:
+        case {"regions": regions}:
             count = len(regions)
             return f"{count} regions"
-        case {'details': _}:
+        case {"details": _}:
             return "Global availability"
-        case {'gallery_images': gallery_images, 'marketplace_images': marketplace_images}:
+        case {
+            "gallery_images": gallery_images,
+            "marketplace_images": marketplace_images,
+        }:
             gallery_count = len(gallery_images)
             marketplace_count = len(marketplace_images)
             return f"{gallery_count} gallery + {marketplace_count} marketplace images"
-        case {'gallery_images': gallery_images}:
+        case {"gallery_images": gallery_images}:
             gallery_count = len(gallery_images)
             return f"{gallery_count} gallery images"
-        case {'marketplace_images': marketplace_images}:
+        case {"marketplace_images": marketplace_images}:
             marketplace_count = len(marketplace_images)
             return f"{marketplace_count} marketplace images"
         case _:
             return "Details available"
 
+
 def generate_download_links(flavor, platform):
     """
     Generate download links for the flavor with correct file extension based on platform
@@ -396,6 +396,7 @@
     download_url = f"{base_url}/{flavor}/{filename}"
     return f"[{filename}]({download_url})"
 
+
 def generate_detailed_format(grouped_data):
     """
     Generate the old detailed format with YAML
@@ -406,11 +407,13 @@
         if variant not in grouped_data:
             continue
 
-        output += f"<details>\n<summary>Variant - {VARIANT_NAMES[variant]}</summary>\n\n"
+        output += (
+            f"<details>\n<summary>Variant - {VARIANT_NAMES[variant]}</summary>\n\n"
+        )
         output += f"### Variant - {VARIANT_NAMES[variant]}\n\n"
 
         for platform in sorted(grouped_data[variant].keys()):
-            platform_long_name = cloud_fullname_dict.get(platform, platform)
+            platform_long_name = CLOUD_FULLNAME_DICT.get(platform, platform)
             output += f"<details>\n<summary>{platform.upper()} - {platform_long_name}</summary>\n\n"
             output += f"#### {platform.upper()} - {platform_long_name}\n\n"
 
@@ -435,34 +438,34 @@
                     download_url = f"https://gardenlinux-github-releases.s3.amazonaws.com/objects/{data['flavor']}/{filename}"
                     output += f"  download_url: {download_url}\n"
 
-                    if 'regions' in data:
+                    if "regions" in data:
                         output += "  regions:\n"
-                        for region in data['regions']:
-                            if 'image_name' in region:
+                        for region in data["regions"]:
+                            if "image_name" in region:
                                 output += f"    - region: {region['region']}\n"
                                 output += f"      image_id: {region['image_id']}\n"
                                 output += f"      image_name: {region['image_name']}\n"
                             else:
                                 output += f"    - region: {region['region']}\n"
                                 output += f"      image_id: {region['image_id']}\n"
-                    elif 'details' in data and platform != 'gcp':
+                    elif "details" in data and platform != "gcp":
                         output += "  details:\n"
-                        for key, value in data['details'].items():
+                        for key, value in data["details"].items():
                             output += f"    {key}: {value}\n"
-                    elif platform == 'gcp' and 'details' in data:
+                    elif platform == "gcp" and "details" in data:
                         # For GCP, move details up to same level as flavor
-                        for key, value in data['details'].items():
+                        for key, value in data["details"].items():
                             output += f"  {key}: {value}\n"
-                    elif 'gallery_images' in data or 'marketplace_images' in data:
-                        if data.get('gallery_images'):
+                    elif "gallery_images" in data or "marketplace_images" in data:
+                        if data.get("gallery_images"):
                             output += "  gallery_images:\n"
-                            for img in data['gallery_images']:
+                            for img in data["gallery_images"]:
                                 output += f"    - hyper_v_generation: {img['hyper_v_generation']}\n"
                                 output += f"      azure_cloud: {img['azure_cloud']}\n"
                                 output += f"      image_id: {img['image_id']}\n"
-                        if data.get('marketplace_images'):
+                        if data.get("marketplace_images"):
                             output += "  marketplace_images:\n"
-                            for img in data['marketplace_images']:
+                            for img in data["marketplace_images"]:
                                 output += f"    - hyper_v_generation: {img['hyper_v_generation']}\n"
                                 output += f"      urn: {img['urn']}\n"
 
@@ -475,13 +478,23 @@
 
     return output
 
-def download_metadata_file(s3_artifacts, cname, artifacts_dir):
+
+def download_metadata_file(
+    s3_artifacts, cname, version, commitish_short, artifacts_dir
+):
     """
     Download metadata file (s3_metadata.yaml)
     """
+    LOGGER.debug(
+        f"{s3_artifacts=} | {cname=} | {version=} | {commitish_short=} | {artifacts_dir=}"
+    )
     release_object = list(
-        s3_artifacts._bucket.objects.filter(Prefix=f"meta/singles/{cname}")
+        s3_artifacts._bucket.objects.filter(
+            Prefix=f"meta/singles/{cname}-{version}-{commitish_short}"
+        )
     )[0]
+    LOGGER.debug(f"{release_object.bucket_name=} | {release_object.key=}")
+
     s3_artifacts._bucket.download_file(
         release_object.key, artifacts_dir.joinpath(f"{cname}.s3_metadata.yaml")
     )
@@ -490,7 +503,7 @@
 def download_all_metadata_files(version, commitish):
     repo = Repo(".")
     commit = repo.commit(commitish)
-    flavors_data = commit.tree["flavors.yaml"].data_stream.read().decode('utf-8')
+    flavors_data = commit.tree["flavors.yaml"].data_stream.read().decode("utf-8")
     flavors = FlavorsParser(flavors_data).filter(only_publish=True)
 
     local_dest_path = Path("s3_downloads")
@@ -500,12 +513,15 @@
 
     s3_artifacts = S3Artifacts(GARDENLINUX_GITHUB_RELEASE_BUCKET_NAME)
 
+    commitish_short = commitish[:8]
+
     for flavor in flavors:
-        cname = CName(flavor[1], flavor[0], "{0}-{1}".format(version, commitish))
+        cname = CName(flavor[1], flavor[0], "{0}-{1}".format(version, commitish_short))
+        LOGGER.debug(f"{flavor=} {version=} {commitish=}")
         # Filter by image variants - only download if the flavor matches one of the variants
         flavor_matches_variant = False
         for variant_suffix in IMAGE_VARIANTS:
-            if variant_suffix == '':
+            if variant_suffix == "":
                 last_part = cname.cname.split("-")[-1]
                 if "_" not in last_part:
                     flavor_matches_variant = True
@@ -516,19 +532,20 @@
                 break
 
         if not flavor_matches_variant:
-            print(f"INFO: Skipping flavor {cname.cname} - not matching image variants filter")
+            LOGGER.info(
+                f"Skipping flavor {cname.cname} - not matching image variants filter"
+            )
             continue
 
         try:
-            download_metadata_file(s3_artifacts, cname.cname, local_dest_path)
+            download_metadata_file(
+                s3_artifacts, cname.cname, version, commitish_short, local_dest_path
+            )
         except IndexError:
-            print(f"WARNING: No artifacts found for flavor {cname.cname}, skipping...")
+            LOGGER.warn(f"No artifacts found for flavor {cname.cname}, skipping...")
             continue
 
-    return [ str(artifact) for artifact in local_dest_path.iterdir() ]
-
-
-
+    return [str(artifact) for artifact in local_dest_path.iterdir()]
 
 
 def _parse_match_section(pkg_list: list):
@@ -539,10 +556,11 @@
             pkg_string = next(iter(pkg))
             output += f"\n{pkg_string}:\n"
             for item in pkg[pkg_string]:
-                for k,v in item.items():
+                for k, v in item.items():
                     output += f"  * {k}: {v}\n"
     return output
 
+
 def release_notes_changes_section(gardenlinux_version):
     """
         Get list of fixed CVEs, grouped by upgraded package.
@@ -551,7 +569,7 @@
     """
     try:
         url = f"https://glvd.ingress.glvd.gardnlinux.shoot.canary.k8s-hana.ondemand.com/v1/patchReleaseNotes/{gardenlinux_version}"
-        response = requests.get(url)
+        response = requests.get(url, timeout=REQUESTS_TIMEOUTS)
         response.raise_for_status()  # Will raise an error for bad responses
         data = response.json()
 
@@ -560,7 +578,7 @@
 
         output = [
             "## Changes",
-            "The following packages have been upgraded, to address the mentioned CVEs:"
+            "The following packages have been upgraded, to address the mentioned CVEs:",
         ]
         for package in data["packageList"]:
             upgrade_line = (
@@ -571,35 +589,41 @@
 
             if package["fixedCves"]:
                 for fixedCve in package["fixedCves"]:
-                    output.append(f'  - {fixedCve}')
+                    output.append(f"  - {fixedCve}")
 
         return "\n".join(output) + "\n\n"
     except:
         # There are expected error cases, for example with versions not supported by glvd (1443.x) or when the api is not available
         # Fail gracefully by adding the placeholder we previously used, so that the release note generation does not fail.
-        return textwrap.dedent("""
+        return textwrap.dedent(
+            """
         ## Changes
         The following packages have been upgraded, to address the mentioned CVEs:
         **todo release facilitator: fill this in**
-        """)
+        """
+        )
+
 
 def release_notes_software_components_section(package_list):
     output = "## Software Component Versions\n"
     output += "```"
     output += "\n"
-    packages_regex = re.compile(r'^linux-image-amd64$|^systemd$|^containerd$|^runc$|^curl$|^openssl$|^openssh-server$|^libc-bin$')
+    packages_regex = re.compile(
+        r"^linux-image-amd64$|^systemd$|^containerd$|^runc$|^curl$|^openssl$|^openssh-server$|^libc-bin$"
+    )
     for entry in package_list.values():
         if packages_regex.match(entry.deb_source):
-            output += f'{entry!r}\n'
+            output += f"{entry!r}\n"
     output += "```"
     output += "\n\n"
     return output
 
+
 def release_notes_compare_package_versions_section(gardenlinux_version, package_list):
     output = ""
-    version_components = gardenlinux_version.split('.')
+    version_components = gardenlinux_version.split(".")
     # Assumes we always have version numbers like 1443.2
-    if (len(version_components) == 2):
+    if len(version_components) == 2:
         try:
             major = int(version_components[0])
             patch = int(version_components[1])
@@ -607,37 +631,57 @@
             if patch > 0:
                 previous_version = f"{major}.{patch - 1}"
 
-                output += f"## Changes in Package Versions Compared to {previous_version}\n"
-                output += "```diff\n"
-                output += subprocess.check_output(['/usr/bin/env', 'bash','./hack/compare-apt-repo-versions.sh', previous_version, gardenlinux_version]).decode("utf-8")
-                output += "```\n\n"
+                output += (
+                    f"## Changes in Package Versions Compared to {previous_version}\n"
+                )
+                output += compare_apt_repo_versions(
+                    previous_version, gardenlinux_version
+                )
             elif patch == 0:
                 output += f"## Full List of Packages in Garden Linux version {major}\n"
                 output += "<details><summary>Expand to see full list</summary>\n"
                 output += "<pre>"
                 output += "\n"
                 for entry in package_list.values():
-                    output += f'{entry!r}\n'
+                    output += f"{entry!r}\n"
                 output += "</pre>"
                 output += "\n</details>\n\n"
 
         except ValueError:
-            print(f"Could not parse {gardenlinux_version} as the Garden Linux version, skipping version compare section")
+            LOGGER.error(
+                f"Could not parse {gardenlinux_version} as the Garden Linux version, skipping version compare section"
+            )
     else:
-        print(f"Unexpected version number format {gardenlinux_version}, expected format (major is int).(patch is int)")
+        LOGGER.error(
+            f"Unexpected version number format {gardenlinux_version}, expected format (major is int).(patch is int)"
+        )
+    return output
+
+
+def compare_apt_repo_versions(previous_version, current_version):
+    previous_repo = GardenLinuxRepo(previous_version)
+    current_repo = GardenLinuxRepo(current_version)
+    pkg_diffs = sorted(compare_repo(previous_repo, current_repo), key=lambda t: t[0])
+
+    output = f"| Package | {previous_version} | {current_version} |\n"
+    output += "|---------|--------------------|-------------------|\n"
+
+    for pkg in pkg_diffs:
+        output += f"|{pkg[0]} | {pkg[1] if pkg[1] is not None else '-'} | {pkg[2] if pkg[2] is not None else '-'} |\n"
     return output
 
 
 def _get_package_list(gardenlinux_version):
-    (path, headers) = urllib.request.urlretrieve(f'https://packages.gardenlinux.io/gardenlinux/dists/{gardenlinux_version}/main/binary-amd64/Packages.gz')
-    with gzip.open(path, 'rt') as f:
+    (path, headers) = urllib.request.urlretrieve(
+        f"https://packages.gardenlinux.io/gardenlinux/dists/{gardenlinux_version}/main/binary-amd64/Packages.gz"
+    )
+    with gzip.open(path, "rt") as f:
         d = DebsrcFile()
         d.read(f)
         return d
 
-def create_github_release_notes(gardenlinux_version, commitish):
-    commitish_short=commitish[:8]
 
+def create_github_release_notes(gardenlinux_version, commitish):
     package_list = _get_package_list(gardenlinux_version)
 
     output = ""
@@ -646,16 +690,15 @@
 
     output += release_notes_software_components_section(package_list)
 
-    output += release_notes_compare_package_versions_section(gardenlinux_version, package_list)
+    output += release_notes_compare_package_versions_section(
+        gardenlinux_version, package_list
+    )
 
-    metadata_files = download_all_metadata_files(gardenlinux_version, commitish_short)
+    metadata_files = download_all_metadata_files(gardenlinux_version, commitish)
 
     output += generate_release_note_image_ids(metadata_files)
 
     output += "\n"
-    output += "## Kernel Package direct download links\n"
-    output += get_kernel_urls(gardenlinux_version)
-    output += "\n"
     output += "## Kernel Module Build Container (kmodbuild) "
     output += "\n"
     output += "```"
@@ -666,84 +709,128 @@
     output += "\n"
     return output
 
+
 def write_to_release_id_file(release_id):
     try:
-        with open('.github_release_id', 'w') as file:
+        with open(".github_release_id", "w") as file:
             file.write(release_id)
-        print(f"Created .github_release_id successfully.")
+        LOGGER.info("Created .github_release_id successfully.")
     except IOError as e:
-        print(f"Could not create .github_release_id file: {e}")
+        LOGGER.error(f"Could not create .github_release_id file: {e}")
         sys.exit(1)
 
+
 def create_github_release(owner, repo, tag, commitish, body):
 
-    token = os.environ.get('GITHUB_TOKEN')
+    token = os.environ.get("GITHUB_TOKEN")
     if not token:
         raise ValueError("GITHUB_TOKEN environment variable not set")
 
     headers = {
-        'Authorization': f'token {token}',
-        'Accept': 'application/vnd.github.v3+json'
+        "Authorization": f"token {token}",
+        "Accept": "application/vnd.github.v3+json",
     }
 
     data = {
-        'tag_name': tag,
-        'target_commitish': commitish,
-        'name': tag,
-        'body': body,
-        'draft': False,
-        'prerelease': False
+        "tag_name": tag,
+        "target_commitish": commitish,
+        "name": tag,
+        "body": body,
+        "draft": False,
+        "prerelease": False,
     }
 
-    response = requests.post(f'https://api.github.com/repos/{owner}/{repo}/releases', headers=headers, data=json.dumps(data))
+    response = requests.post(
+        f"https://api.github.com/repos/{owner}/{repo}/releases",
+        headers=headers,
+        data=json.dumps(data),
+        timeout=REQUESTS_TIMEOUTS
+    )
 
     if response.status_code == 201:
-        print("Release created successfully")
+        LOGGER.info("Release created successfully")
         response_json = response.json()
-        return response_json.get('id')
+        return response_json.get("id")
+    else:
+        LOGGER.error("Failed to create release")
+        LOGGER.debug(response.json())
+        response.raise_for_status()
+
+
+def upload_to_github_release_page(
+    github_owner, github_repo, gardenlinux_release_id, file_to_upload, dry_run
+):
+    if dry_run:
+        LOGGER.info(
+            f"Dry run: would upload {file_to_upload} to release {gardenlinux_release_id} in repo {github_owner}/{github_repo}"
+        )
+        return
+
+    token = os.environ.get("GITHUB_TOKEN")
+    if not token:
+        raise ValueError("GITHUB_TOKEN environment variable not set")
+
+    headers = {
+        "Authorization": f"token {token}",
+        "Content-Type": "application/octet-stream",
+    }
+
+    upload_url = f"https://uploads.github.com/repos/{github_owner}/{github_repo}/releases/{gardenlinux_release_id}/assets?name={os.path.basename(file_to_upload)}"
+
+    try:
+        with open(file_to_upload, "rb") as f:
+            file_contents = f.read()
+    except IOError as e:
+        LOGGER.error(f"Error reading file {file_to_upload}: {e}")
+        return
+
+    response = requests.post(upload_url, headers=headers, data=file_contents, timeout=REQUESTS_TIMEOUTS)
+    if response.status_code == 201:
+        LOGGER.info("Upload successful")
     else:
-        print("Failed to create release")
-        print(response.json())
+        LOGGER.error(
+            f"Upload failed with status code {response.status_code}: {response.text}"
+        )
         response.raise_for_status()
 
+
 def main():
     parser = argparse.ArgumentParser(description="GitHub Release Script")
-    subparsers = parser.add_subparsers(dest='command')
+    subparsers = parser.add_subparsers(dest="command")
 
-    create_parser = subparsers.add_parser('create')
-    create_parser.add_argument('--owner', default="gardenlinux")
-    create_parser.add_argument('--repo', default="gardenlinux")
-    create_parser.add_argument('--tag', required=True)
-    create_parser.add_argument('--commit', required=True)
-    create_parser.add_argument('--dry-run', action='store_true', default=False)
-
-    upload_parser = subparsers.add_parser('upload')
-    upload_parser.add_argument('--release_id', required=True)
-    upload_parser.add_argument('--file_path', required=True)
+    create_parser = subparsers.add_parser("create")
+    create_parser.add_argument("--owner", default="gardenlinux")
+    create_parser.add_argument("--repo", default="gardenlinux")
+    create_parser.add_argument("--tag", required=True)
+    create_parser.add_argument("--commit", required=True)
+    create_parser.add_argument("--dry-run", action="store_true", default=False)
+
+    upload_parser = subparsers.add_parser("upload")
+    upload_parser.add_argument("--owner", default="gardenlinux")
+    upload_parser.add_argument("--repo", default="gardenlinux")
+    upload_parser.add_argument("--release_id", required=True)
+    upload_parser.add_argument("--file_path", required=True)
+    upload_parser.add_argument("--dry-run", action="store_true", default=False)
 
-    kernelurl_parser = subparsers.add_parser('kernelurls')
-    kernelurl_parser.add_argument('--version', required=True)
     args = parser.parse_args()
 
-    if args.command == 'create':
+    if args.command == "create":
         body = create_github_release_notes(args.tag, args.commit)
-        if not args.dry_run:
-            release_id = create_github_release(args.owner, args.repo, args.tag, args.commit, body)
-            write_to_release_id_file(f"{release_id}")
-            print(f"Release created with ID: {release_id}")
-        else:
+        if args.dry_run:
             print(body)
-    elif args.command == 'upload':
-        # Implementation for 'upload' command
-        pass
-    elif args.command == 'kernelurls':
-        # Implementation for 'upload' command
-        output =""
-        output += "## Kernel Package direct download links\n"
-        output += get_kernel_urls(args.version)
-        print(output)
+        else:
+            release_id = create_github_release(
+                args.owner, args.repo, args.tag, args.commit, body
+            )
+            write_to_release_id_file(f"{release_id}")
+            LOGGER.info(f"Release created with ID: {release_id}")
+    elif args.command == "upload":
+        upload_to_github_release_page(
+            args.owner, args.repo, args.release_id, args.file_path, args.dry_run
+        )
     else:
         parser.print_help()
 
+
 if __name__ == "__main__":
     main()

Release note:

- github module added with routines for GitHub release page creation; replaces
  - .github/workflows/get_kernelurls.py,
  - .github/workflows/release-page.sh,
  - .github/workflows/release_note.py from the gardenlinux/gardenlinux repo
- fixed a bug in the release page creation code (filtering of S3 bucket files)
- implemented table formatting for package versions on the release page (replaces diff formatting)
- improved configuration for code linters
- poetry-managed project dependencies updated
- tests added (~90% code coverage for the new code)
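
For reviewers who want to try the consolidated module locally, here is a minimal sketch of the new entry point (the version and commit hash are placeholders; it assumes a gardenlinux checkout with flavors.yaml plus access to the package mirror and the release S3 bucket):

```python
# Render the release notes body the way `gl-gh create --dry-run` would print
# it, without creating a GitHub release. Version/commit values are placeholders.
from gardenlinux.github.__main__ import create_github_release_notes

body = create_github_release_notes("1443.2", "0123456789abcdef0123456789abcdef01234567")
print(body)
```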


codecov bot commented Sep 18, 2025

Codecov Report

❌ Patch coverage is 89.83740% with 50 lines in your changes missing coverage. Please review.
✅ Project coverage is 92.43%. Comparing base (8158f4a) to head (12df7e8).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
src/gardenlinux/github/__main__.py 89.75% 50 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #179      +/-   ##
==========================================
- Coverage   92.95%   92.43%   -0.53%     
==========================================
  Files          28       30       +2     
  Lines        1306     1797     +491     
==========================================
+ Hits         1214     1661     +447     
- Misses         92      136      +44     


description: "Generated GitHub workflow flavors matrix"
version:
  description: GardenLinux Python library version
  default: "0.10.0"
Contributor

Please remove the version here. I don't see a benefit in parametrizing it here.

pyproject.toml Outdated
gl-flavors-parse = "gardenlinux.flavors.__main__:main"
gl-oci = "gardenlinux.oci.__main__:main"
gl-s3 = "gardenlinux.s3.__main__:main"
gl-gh = "gardenlinux.github.__main__:main"
Contributor

The last review request is still valid.

@@ -0,0 +1,3 @@
from .__main__ import create_github_release_notes

__all__ = ["create_github_release_notes"]
Contributor

IMHO there's no reason to export this function.

Contributor Author

In my opinion, calling gardenlinux.github.create_github_release_notes looks much better than gardenlinux.github.__main__.create_github_release_notes.

Contributor

While I do agree in general, you won't call create_github_release_notes() from any source other than the main entry point.


s3_artifacts._bucket.download_file(
release_object.key, artifacts_dir.joinpath(f"{cname}.s3_metadata.yaml")
)
Contributor

I would still like to see s3_artifacts.bucket here and everywhere else it is used; reasoning was given last time.
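
For clarity, the requested shape would be roughly the following (a sketch assuming S3Artifacts exposes, or would expose, a public bucket property mirroring the private _bucket attribute):

```python
# Sketch of the requested change; assumes a public `bucket` property on
# S3Artifacts. `cname`, `version` and `commitish_short` as in the diff above.
release_object = list(
    s3_artifacts.bucket.objects.filter(
        Prefix=f"meta/singles/{cname}-{version}-{commitish_short}"
    )
)[0]
```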

def download_all_metadata_files(version, commitish):
repo = Repo(".")
commit = repo.commit(commitish)
flavors_data = commit.tree["flavors.yaml"].data_stream.read().decode("utf-8")
Contributor

I'm still not sure if flavors.yaml should be "checked" out here.

As you are touching a local Git repository here, I'm not sure it is useful to reset the checked-out commit each time this function is called. You may just access whatever flavors.yaml is located here instead. Please be aware that FlavorsParser will use the features directory located here anyway, without ensuring that the commitish matches.
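
A sketch of that suggestion, assuming flavors.yaml sits in the repository root next to the features directory that FlavorsParser reads:

```python
# Read flavors.yaml from the working tree instead of resolving it through a
# specific commit, so the file and the features/ directory stay in sync.
from pathlib import Path

from gardenlinux.flavors import Parser as FlavorsParser

flavors_data = Path("flavors.yaml").read_text(encoding="utf-8")
flavors = FlavorsParser(flavors_data).filter(only_publish=True)
```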

commitish_short = commitish[:8]

for flavor in flavors:
cname = CName(flavor[1], flavor[0], "{0}-{1}".format(version, commitish_short))
Contributor

Same request as last time:

You may use the commit_id parameter recently added here.
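
Illustratively, something like the following (the exact CName signature is not shown in this PR, so the commit_id keyword is an assumption based on the comment above):

```python
# Illustrative only: assumes CName gained a commit_id keyword argument that
# replaces the hand-built "{version}-{commitish_short}" version suffix.
cname = CName(flavor[1], flavor[0], version, commit_id=commitish_short)
```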

def release_notes_compare_package_versions_section(gardenlinux_version, package_list):
output = ""
version_components = gardenlinux_version.split(".")
# Assumes we always have version numbers like 1443.2
Contributor

Please be aware that we'll introduce semantic versioning soon. This logic should use a semver Python package if suitable, and consider the patch version to be zero if only one dot is found in the string.
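
A possible sketch of that normalization, assuming the semver package (not currently a dependency) were adopted:

```python
# Sketch: pad missing components with zeros so both "1443" and "1443.2"
# parse as semantic versions ("1443.0.0", "1443.2.0").
import semver

def parse_gardenlinux_version(version_string: str) -> semver.VersionInfo:
    parts = version_string.split(".")
    if not 1 <= len(parts) <= 3:
        raise ValueError(f"Unexpected version format: {version_string}")
    parts += ["0"] * (3 - len(parts))
    return semver.VersionInfo.parse(".".join(parts))
```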

Member

fwilhe commented Sep 22, 2025

@vivus-ignis please cherry-pick the changes from gardenlinux/gardenlinux#3485

Contributor Author

> @vivus-ignis please cherry-pick the changes from gardenlinux/gardenlinux#3485

Done.

Akendo commented Sep 22, 2025

Hate to be the party pooper here, but this PR is far too big. It contains many different types of changes that could easily be split up. I get that this is a refactor, but we can split it into more mindful pieces.

Please split this PR into several small PRs. For instance, the test data could be a standalone PR. Code tidy-ups, such as changing quotation marks, could be a separate PR. Updating the .gitignore could be a separate PR.

We change the exports from the apt module; this can be in a separate PR.

Why do we add a binary file to this PR? Can we skip it, or can we generate it?
