How to Collect and Upload Log Files with Jamf Pro

In this third instalment, we will use ChatGPT to automate this process, collecting logs from a client and uploading them directly to the Jamf instance using the API.

In this third instalment of my blog series, I’ll delve into my experience as a Jamf Pro Administrator and how I’ve addressed a common issue: the cumbersome process of collecting logs from a user’s computer. This process often involves delays and multiple contacts before we can start resolving the issue. I’ll walk you through my journey of using ChatGPT to automate this process, ultimately creating a solution that not only collects the necessary logs but also uploads them directly to the Jamf instance.

Let's see how we can use a script in a policy to collect and upload log files with Jamf Pro.

The Challenge: Collecting Logs Efficiently

When a user’s computer encounters problems, we need to collect logs to diagnose and resolve the issue. The typical process involves:

  • The customer contacts us about the issue.
  • We instruct the customer to obtain the logs from the user’s computer.
  • The user collects the logs, compresses them into a zip file, and sends it to us.
  • We finally download the logs and begin troubleshooting.

This process is time-consuming and often involves delays due to coordination between the customer and the user.

Automating Log Collection with a Self-Service Button

To streamline this process, you could create a self-service button that users can click to automatically collect the necessary logs, compress them into a zip file, and upload them directly to the Jamf instance. This would allow us to access the logs immediately, speeding up the troubleshooting process. However, this doesn’t help with the delays in getting the log files, as we are still dependent on the user supplying the zip file when raising the case.

Breaking Down the Solution: Using ChatGPT for Script Development

While ChatGPT may not be perfect at generating complete scripts, it excels at breaking down problems and generating solutions step by step. Here’s how I approached this:


Defining the Requirements for an Automated Log Collection Script

Planning the Parameters

To create a flexible and efficient script for automating log collection and upload to Jamf Pro, I identified several key parameters:

Parameter 4: Jamf URL
  • Primary Method: The script should accept the Jamf URL as an input.
  • Backup Method: If the URL is not provided, the script will read it from the default location using:

defaults read /Library/Preferences/com.jamfsoftware.jamf.plist jss_url
Parameter 5: API Credentials
  • The script requires a username and password to log into the Jamf server via API.
  • To avoid having clear-text credentials in the script, the credentials will be passed as <username>:<password> and converted to Base64 using:

echo -n '<username>:<password>' | base64
Parameter 6: Log Collection Types

I predefined several types of logs to collect, allowing for targeted log gathering:

  • ALL: Runs macOS sysdiagnose.
  • System: Collects /private/var/log/system.log.
  • Install: Collects /private/var/log/install.log.
  • JamfAutoUpdate: Collects /private/var/log/JamfAutoUpdate.log.
  • JamfProtect: Collects /private/var/log/JamfProtect.log.
  • SystemProfiler: Runs the system_profiler command and uploads the resulting .spx file.

Parameter 7: Splitting Zip Files

This parameter determines the size of chunks for splitting large zip files, which is particularly useful for self-hosted Jamf Pro instances. We will discuss this further later.

Additional Variables

  • Serial Number Variable
    • Purpose: This variable will be used to find the computer record in Jamf and will also be used as the file name for the system profile.
    • Action: The script will obtain the serial number of the computer to identify it and to name the files uniquely.
  • Working Directory
    • Purpose: A temporary directory will be created to store the collected logs and zip files.
    • Action: The script will create this directory at runtime and use it to manage files during the log collection and upload process.
  • Date Variable
    • Purpose: The current date will be used to name the zip files, ensuring each log file has a unique and identifiable name.
    • Action: When creating zip files, the script generates a date string and appends it to the file names.

Automating Log Collection and Upload with Jamf Pro

The script is designed to streamline the process of collecting log files from a Jamf-managed macOS device, splitting and compressing them if necessary, and then uploading them to a Jamf Pro server. I started by getting ChatGPT to work on each part of the script; the resulting pieces were then joined together and tested.

Here’s how the script works:

Variable Initialisation

Jamf URL and API Credentials:
  • The script takes the Jamf URL and API credentials as input parameters. If the URL is not provided, it reads from the system’s configuration file.
  • The script ensures that the Jamf URL does not end with a trailing slash.

# Variables
jamf_url=$4
apiHash=$5
log_collection_type=$6
chunk_size=$7 # Size of each chunk in MB

# Ensure jamf_url does not end with a trailing slash
jamf_url="${jamf_url%/}"
Log Collection Type and Chunk Size:
  • A parameter specifies the type of logs to collect, defaulting to “Install” if not provided.
  • The chunk size for splitting large files defaults to 10 MB if not specified and is converted to bytes.

# Default chunk size if not provided
if [[ -z "$chunk_size" ]]; then
    chunk_size=10
fi

# Convert chunk size to bytes
chunk_size_bytes=$((chunk_size * 1024 * 1024))
System Information:
  • The script retrieves the computer’s serial number, which is used to identify the computer in Jamf Pro and to name the system profile file.
  • A temporary working directory is created to store the collected logs and zip files.
  • A date variable is set to append to the file names for uniqueness.

# Check for Jamf Pro URL in plist if not passed
if [[ -z "$jamf_url" ]]; then
    jamf_url=$(defaults read /Library/Preferences/com.jamfsoftware.jamf.plist jss_url | sed 's/.$//')
    if [[ -z "$jamf_url" ]]; then
        echo "Error: Jamf Pro Server URL not provided and not found in plist."
        exit 1
    fi
else
    # Ensure jamf_url does not end with a trailing slash
    jamf_url="${jamf_url%/}"
fi

# Default log collection type if not passed
if [[ -z "$log_collection_type" ]]; then
    log_collection_type="Install"
fi

# Get the serial number of the computer
serialNumber=$(system_profiler SPHardwareDataType | awk '/Serial Number/{print $4}')

# Create a temporary working directory
working_dir=$(mktemp -d /private/var/tmp/jamf_script.XXXXXX)
convertedfile="$working_dir/sysdiagnose.zip"
date_suffix=$(date +%d-%m-%Y)

Functions

Collect Logs (collect_logs):

  • This function gathers the specified logs based on the provided type.
  • It handles various types of logs, including system logs, installation logs, Jamf AutoUpdate logs, Jamf Protect logs, and a detailed system profile.
  • For comprehensive diagnostics (log type “ALL”), it runs the sysdiagnose command and prepares the resulting file for processing.

# Function to collect logs based on the type
collect_logs() {
    case "$log_collection_type" in
        "ALL")
            # Run sysdiagnose command (the glob sits outside the quotes so it expands)
            /bin/rm -rf "$working_dir"/sysdiagnose*
            /usr/bin/sysdiagnose -u
            mv /var/tmp/sysdiagnose*.tar.gz "$convertedfile"
            ;;
        "System")
            zip -rj "$working_dir/system_log_$date_suffix.zip" /private/var/log/system.log
            ;;
        "Install")
            zip -rj "$working_dir/install_log_$date_suffix.zip" /private/var/log/install.log
            ;;
        "JamfAutoUpdate")
            zip -rj "$working_dir/jamfautoupdate_log_$date_suffix.zip" /private/var/log/JamfAutoUpdate.log
            ;;
        "JamfProtect")
            zip -rj "$working_dir/jamfprotect_log_$date_suffix.zip" /private/var/log/JamfProtect.log
            ;;
        "SystemProfiler")
            system_profiler -xml > "$working_dir/${serialNumber}_SystemProfile.spx"
            zip -rj "$working_dir/${serialNumber}_SystemProfile_$date_suffix.zip" "$working_dir/${serialNumber}_SystemProfile.spx"
            ;;
        *)
            echo "Error: Unknown log collection type."
            exit 1
            ;;
    esac
}

Split and Zip Files (split_and_zip):

  • Due to potential issues with uploading large files to on-prem Jamf Pro installations, this function splits large files into smaller chunks and compresses each chunk separately.
  • This ensures that even large diagnostic files can be uploaded without issues.

# Function to split and zip the files into chunks
split_and_zip() {
    local file_to_split=$1
    local file_size=$(stat -f%z "$file_to_split")
    if [[ "$file_size" -gt "$chunk_size_bytes" ]]; then
        # Split the file into chunks
        split -b "${chunk_size}M" "$file_to_split" "$working_dir/split-sysdiagnose-"

        # Zip each chunk separately
        for chunk in $working_dir/split-sysdiagnose-*; do
            if [[ $chunk != *".zip"* ]]; then
                mv "$chunk" "$chunk.tmp"
                zip -j "$chunk-$date_suffix.zip" "$chunk.tmp"
                /bin/rm "$chunk.tmp"
            fi
        done
        /bin/rm "$file_to_split"
    fi
}

Bearer Token Management:

  • Get Bearer Token (getBearerToken): This function requests a bearer token from the Jamf Pro server using API credentials, with error handling to ensure the token is retrieved successfully.
  • Check Token Expiration (checkTokenExpiration): This function checks if the bearer token is still valid and requests a new token if necessary.
  • Invalidate Token (invalidateToken): After completing the upload process, this function invalidates the bearer token to enhance security.

# Function to get a Bearer Token with error checking
getBearerToken() {
    response=$(/usr/bin/curl -s --header "authorization: Basic ${apiHash}" "${jamf_url}/api/v1/auth/token" -X POST)
    if [[ $? -ne 0 ]]; then
        echo "Error: Failed to get bearer token. Check your API credentials and URL."
        exit 1
    fi
    bearerToken=$(echo "$response" | plutil -extract token raw -)
    if [[ -z "$bearerToken" ]]; then
        echo "Error: Failed to extract bearer token from response."
        exit 1
    fi
    tokenExpiration=$(echo "$response" | plutil -extract expires raw - | awk -F . '{print $1}')
    tokenExpirationEpoch=$(date -j -f "%Y-%m-%dT%T" "$tokenExpiration" +"%s")
}

# Function to check token expiration
checkTokenExpiration() {
    nowEpochUTC=$(date -j -f "%Y-%m-%dT%T" "$(date -u +"%Y-%m-%dT%T")" +"%s")
    if [[ $tokenExpirationEpoch -gt $nowEpochUTC ]]; then
        echo "Token valid until the following epoch time: $tokenExpirationEpoch"
    else
        echo "No valid token available, getting new token"
        getBearerToken
    fi
}

# Function to invalidate token
invalidateToken() {
    responseCode=$(curl -w "%{http_code}" -H "Authorization: Bearer ${bearerToken}" "$jamf_url/api/v1/auth/invalidate-token" -X POST -s -o /dev/null)
    if [[ ${responseCode} == 204 ]]; then
        echo "Token successfully invalidated"
        bearerToken=""
        tokenExpirationEpoch="0"
    elif [[ ${responseCode} == 401 ]]; then
        echo "Token already invalid"
    else
        echo "An unknown error occurred invalidating the token"
    fi
}

Main Script Logic

  • Collecting and Splitting Logs:
    • The script starts by executing the collect_logs function to gather the necessary logs.
    • If the log collection type is “ALL,” the script calls the split_and_zip function to handle large files.

# Collect the required logs
collect_logs

# Split and zip the collected logs if sysdiagnose was run
if [[ "$log_collection_type" == "ALL" ]]; then
    split_and_zip "$convertedfile"
fi

echo "Log collection and splitting completed."

Retrieving the Computer ID:

  • The script retrieves the computer’s unique ID from the Jamf Pro server using the serial number.
  • This step is crucial for associating the collected logs with the correct computer record in Jamf Pro.

# Get Machine ID
checkTokenExpiration
# -s keeps curl's progress meter out of the policy log
computerID=$(/usr/bin/curl -s -X GET --header "Accept: text/xml" --header "Authorization: Bearer ${bearerToken}" --url "$jamf_url/JSSResource/computers/serialnumber/$serialNumber" | xmllint --xpath 'computer/general/id/text()' -)

if [[ -z "$computerID" ]]; then
    echo "Error: Unable to get computer ID."
    exit 1
fi
echo "Computer ID: $computerID"

Uploading Log Files:

  • The script uploads each collected and compressed log file to the appropriate computer record in Jamf Pro.
  • It checks the result of each upload (curl's --fail flag turns HTTP errors into non-zero exit codes) and reports any file that fails.

# Upload files
for file in $working_dir/*; do
    if [[ -f "$file" ]]; then
        # --fail makes curl exit non-zero on HTTP errors so the status check below works
        /usr/bin/curl --fail -X POST -s --header "Accept: text/xml" --header "Authorization: Bearer ${bearerToken}" --url "${jamf_url}/JSSResource/fileuploads/computers/id/$computerID" -F name=@"${file}"
        uploadStatus=$?
        if [[ $uploadStatus -ne 0 ]]; then
            echo "Error: Upload failed for $file"
        else
            echo "Upload successful for $file"
        fi
    fi
done

Final Cleanup and Token Invalidation:

  • The script invalidates the bearer token to prevent unauthorized access.
  • It then deletes the temporary working directory to clean up any residual files.

# Invalidate token after upload
invalidateToken

# Cleanup
/bin/rm -rf "$working_dir"

echo "Script completed successfully."

# Exit the script successfully
exit 0

Build the policy in Jamf Pro

Now it's time to test: upload the script to the Jamf server and create a policy to run it on a test computer (a quick command-line test is shown after the steps below).

Download the Final Script: ChatGPT Script.txt.
Upload the Script to Jamf Server:
  • Go to Settings > Computer Management > Scripts and create a new script.
  • Copy and paste the contents of the downloaded script.
  • Name the script (e.g., “ChatGPT Script”) and assign it to a category (e.g., “Test” or your preferred category).
  • Under options, enter the following Parameter Labels:
    • Parameter 4: Jamf URL
    • Parameter 5: Username and Password Hash
    • Parameter 6: Log Request
    • Parameter 7: File split size (default is 10 MB)
Create an API Account:
  • Go to Settings > User Accounts and Groups and create a standard account.
  • Name the account (e.g., “JamfCloudAPI”) and set a password (e.g., “JamfCloudAPI1234”).
  • Under account privileges, enable:
    • Computers: Create, Read, Update
    • File Attachments: Create, Read, Update
  • Generate the password hash using:

echo -n 'JamfCloudAPI:JamfCloudAPI1234' | base64

This will produce SmFtZkNsb3VkQVBJOkphbWZDbG91ZEFQSTEyMzQ=, which you’ll need later.
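If you want to sanity-check the hash before pasting it into the policy, you can decode it back and confirm it matches the original username:password pair:

# Decode the Base64 hash to verify it (-d or --decode also works on newer macOS)
echo 'SmFtZkNsb3VkQVBJOkphbWZDbG91ZEFQSTEyMzQ=' | base64 -D
# Expected output: JamfCloudAPI:JamfCloudAPI1234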

Create a Policy:
  • Go to Computers > Policies and create a new policy.
  • Name the policy (e.g., “Request Log Files – Install”) and set the execution frequency to “Ongoing.”
  • Add the script “ChatGPT Script.”
  • Enter the options:
    • Jamf URL
    • Log Request (e.g., “Install”)
    • Hash (e.g., SmFtZkNsb3VkQVBJOkphbWZDbG91ZEFQSTEyMzQ=)
    • File split size (e.g., 100 MB)
  • Scope the policy to your test computer.
  • Make the policy available in Self Service.
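
Once the policy is saved, you can trigger it from Terminal on the test computer instead of clicking through Self Service. This assumes you temporarily add a custom trigger to the policy; the event name collectlogs below is a placeholder.

# Run the policy via a custom trigger (placeholder event name "collectlogs")
sudo jamf policy -event collectlogs

# Or run it directly by policy ID (replace 123 with your policy's ID)
sudo jamf policy -id 123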

Question and Answer Section

Why do split zip files not expand?

Jamf Pro does not accept the default naming convention of split zip files, so the script renames and individually zips each chunk before upload; the chunks must be rejoined before the archive can be expanded.

How do I expand a zip file that has been split?

Download and run the following script. It will expand and rejoin the split zip files so they can be fully extracted.

#!/bin/bash

# Directory containing the split zip files
split_dir=$1

# Ensure the directory exists
if [ ! -d "$split_dir" ]; then
    echo "Error: Directory $split_dir does not exist."
    exit 1
fi

# Change to the directory containing the split zip files
cd "$split_dir" || exit 1

# Create an array of zip files to expand
zip_files=($(ls *.zip 2>/dev/null))

# Check if there are any zip files
if [ ${#zip_files[@]} -eq 0 ]; then
    echo "Error: No zip files found in $split_dir."
    exit 1
fi

# Expand all zip files in the directory
for zipfile in "${zip_files[@]}"; do
    unzip "$zipfile"
    if [ $? -ne 0 ]; then
        echo "Error: Failed to unzip $zipfile."
        exit 1
    fi
    rm "$zipfile"
done

# Check if there are any tmp files to concatenate
tmp_files=($(ls split-sysdiagnose-* 2>/dev/null))

# Check if there are any split files
if [ ${#tmp_files[@]} -eq 0 ]; then
    echo "Error: No split files found in $split_dir."
    exit 1
fi

# Concatenate the tmp files into a single zip file
cat "${tmp_files[@]}" > combined.zip

# Check if the concatenation was successful
if [ $? -ne 0 ]; then
    echo "Error: Failed to concatenate split files."
    exit 1
fi

# Clean up temporary split files
rm split-sysdiagnose-*

echo "Successfully expanded and unzipped the files."

exit 0
Important Note:

Do not use the demo account or password in live instances. Follow your company’s naming conventions and password requirements. The demo credentials are for testing purposes only.

Script Issues:

Advanced users may notice some outdated methods and unnecessary steps in the script, such as redundant variable declarations and URL modifications.

Is it possible to use the API Client?

Yes, the script can be modified to authenticate with an API client instead of a user account; see Jamf's documentation on API Roles and Clients for more information.
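
As a rough sketch of what that change looks like: with an API client created under Settings > API Roles and Clients (granted the same privileges as the account above), the getBearerToken function could request a token via the OAuth client-credentials flow instead. The client ID and secret below are placeholders.

# Request a token with an API client instead of Basic auth (placeholder credentials)
response=$(/usr/bin/curl -s -X POST "${jamf_url}/api/oauth/token" \
    --data-urlencode "client_id=<your-client-id>" \
    --data-urlencode "client_secret=<your-client-secret>" \
    --data-urlencode "grant_type=client_credentials")

# API clients return the token in "access_token" rather than "token"
bearerToken=$(echo "$response" | plutil -extract access_token raw -)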

What should I do if the script fails to upload files to Jamf Pro?

Ensure the API credentials and Jamf URL are correct. Check network connectivity and Jamf Pro server status. Verify the script’s permissions to access and upload files.
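
A quick way to isolate credential or URL problems is to request a token manually from the test machine, using the same Base64 hash the policy passes in (the URL below is a placeholder):

# A valid URL and hash should return JSON containing a "token" field
/usr/bin/curl -s --header "authorization: Basic <your-base64-hash>" \
    "https://yourserver.jamfcloud.com/api/v1/auth/token" -X POST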

How can I test the script without affecting live systems?

Use a test environment or create a separate test policy and scope it to a test computer. Avoid using production credentials or systems for initial testing.

Can I customise the types of logs collected by the script?

Yes, you can modify the script’s log_collection_type parameter to specify different log types, such as System, Install, JamfAutoUpdate, JamfProtect, and SystemProfiler.
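
Adding a new type only requires another case in the collect_logs function. For example, a hypothetical "Jamf" type that collects the Jamf binary's own log could look like this:

        "Jamf")
            zip -rj "$working_dir/jamf_log_$date_suffix.zip" /private/var/log/jamf.log
            ;;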

What is the best way to secure the API credentials used in the script?

Avoid hardcoding credentials in the script. Use environment variables or secure storage solutions. Remember that Base64 is an encoding, not encryption, so the hash must still be stored and transmitted securely.
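
One possible approach, sketched here, is to stage the hash in the System keychain during enrolment and have the script read it at runtime instead of receiving it as a policy parameter; the service name JamfLogUpload is a placeholder you would create yourself.

# Read the Base64 hash from a pre-staged System keychain item (placeholder service name)
apiHash=$(security find-generic-password -s "JamfLogUpload" -w /Library/Keychains/System.keychain)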

How do I troubleshoot if the script does not work as expected?

Check the script logs for error messages. Verify all input parameters and configurations. Test each script component separately to isolate issues. Use debugging tools and logs to identify and resolve problems.
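
To test each component, you can also run the script directly with bash tracing enabled. Jamf reserves parameters 1 to 3 (mount point, computer name, username), so pass three empty strings first; the file name and values below are placeholders.

# Run locally with tracing; $1-$3 are left empty as Jamf would normally fill them
sudo bash -x collect_and_upload.sh '' '' '' \
    "https://yourserver.jamfcloud.com" "<base64-hash>" "Install" "10"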

By addressing these common questions and providing clear answers, you can help users effectively use and troubleshoot the log collection and upload script.

3 Comments

  1. Nice article, however when running via Self Service the policy is failing and the log is showing:

    22: bad math expression: operator expected at `MB'

  2. I fixed the error (using ChatGPT of course)
    it suggested replacing this line:

    split -b "${chunk_size}M" "$file_to_split" "$working_dir/split-sysdiagnose-"

    with

    split -b "${chunk_size_bytes}" "$file_to_split" "$working_dir/split-sysdiagnose-"

    This worked and now the log is uploaded to Jamf.
    Really nice.

    • This may be due to the macOS versions that I am using, which are 14.5 and 14.6. The split command should allow the M.

      -b byte_count[K|k|M|m|G|g]
      Create split files byte_count bytes in length. If k or K is appended to the number, the file is split into byte_count kilobyte pieces. If m or M is appended to the number, the file is split into byte_count megabyte pieces. If g or G is appended to the number, the file is split into byte_count gigabyte pieces.

      I am working on a new script which will use Unified Logging to grab the logs and upload them to Jamf. I will post a link to it once I have finished it.
