
Python for DevOps

Published on 2024-08-18


Here are some important Python modules used for DevOps automation:

  • os: Provides a way to interact with the operating system, including file operations, process management, and system information.

  • requests and urllib3: Used to send HTTP requests and handle HTTP responses.

  • logging: Provides a way to log messages from Python applications.

  • boto3: The AWS SDK for Python, used to create, configure, and manage AWS services programmatically.

  • paramiko: A Python implementation of the SSH protocol, used for secure remote connections.

  • json: Used to encode and decode JSON data.

  • PyYAML: Provides a way to parse and generate YAML data.

  • pandas: Provides data analysis tools, including data manipulation and basic visualization.

  • smtplib: Provides a way to send email messages from Python applications.

Python Use Cases in DevOps

1. Automation of Infrastructure Provisioning

  • Tooling: AWS Boto3, Azure SDK, Terraform, Ansible
  • Example: Automating the creation and management of cloud resources such as EC2 instances, S3 buckets, and RDS databases. Python scripts can use the AWS Boto3 library to manage AWS resources programmatically.

Example code:

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get all EBS snapshots
    response = ec2.describe_snapshots(OwnerIds=['self'])

    # Get all active EC2 instance IDs
    instances_response = ec2.describe_instances(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
    active_instance_ids = set()

    for reservation in instances_response['Reservations']:
        for instance in reservation['Instances']:
            active_instance_ids.add(instance['InstanceId'])

    # Delete each snapshot whose source volume is gone or is not attached
    # to any running instance
    for snapshot in response['Snapshots']:
        snapshot_id = snapshot['SnapshotId']
        volume_id = snapshot.get('VolumeId')

        if not volume_id:
            # The snapshot is not associated with any volume
            ec2.delete_snapshot(SnapshotId=snapshot_id)
            print(f"Deleted EBS snapshot {snapshot_id} as it was not attached to any volume.")
        else:
            # Check whether the volume still exists and is attached to a running instance
            try:
                volume_response = ec2.describe_volumes(VolumeIds=[volume_id])
                attachments = volume_response['Volumes'][0]['Attachments']
                if not any(att.get('InstanceId') in active_instance_ids for att in attachments):
                    ec2.delete_snapshot(SnapshotId=snapshot_id)
                    print(f"Deleted EBS snapshot {snapshot_id} as its volume is not attached to any running instance.")
            except ec2.exceptions.ClientError as e:
                if e.response['Error']['Code'] == 'InvalidVolume.NotFound':
                    # The volume was deleted after the snapshot was taken
                    ec2.delete_snapshot(SnapshotId=snapshot_id)
                    print(f"Deleted EBS snapshot {snapshot_id} as its associated volume was not found.")

Repo: https://github.com/PRATIKNALAWADE/AWS-Cost-Optimization/blob/main/ebs_snapshots.py

2. Use Case: Automating CI/CD Pipelines with Python

In a CI/CD pipeline, automation is key to ensuring that code changes are built, tested, and deployed consistently and reliably. Python can be used to interact with CI/CD tools like Jenkins, GitLab CI, or CircleCI: triggering jobs, handling webhook events, or calling their APIs to deploy applications.

Below is an example of how you can use Python to automate certain aspects of a CI/CD pipeline using Jenkins.

Example: Triggering Jenkins Jobs with Python

Scenario:
You have a Python script that needs to trigger a Jenkins job whenever a new commit is pushed to the main branch of a GitHub repository. The script will also pass some parameters to the Jenkins job, such as the Git commit ID and the branch name.

Step 1: Set Up Jenkins Job

First, ensure that you have a Jenkins job configured to accept parameters. You will need the job name, Jenkins URL, and an API token for authentication.

Step 2: Writing the Python Script

Below is a Python script that triggers the Jenkins job with specific parameters:

import requests

# Jenkins server details
jenkins_url = 'http://your-jenkins-server.com'
job_name = 'your-job-name'
username = 'your-username'
api_token = 'your-api-token'

# Parameters to pass to the Jenkins job
branch_name = 'main'
commit_id = 'abc1234def5678'

# Construct the job URL
job_url = f'{jenkins_url}/job/{job_name}/buildWithParameters'

# Define the parameters to pass
params = {
    'BRANCH_NAME': branch_name,
    'COMMIT_ID': commit_id
}

# Trigger the Jenkins job
response = requests.post(job_url, auth=(username, api_token), params=params)

# Check the response
if response.status_code == 201:
    print('Jenkins job triggered successfully.')
else:
    print(f'Failed to trigger Jenkins job: {response.status_code}, {response.text}')

Step 3: Explanation

  • Jenkins Details:

    • jenkins_url: URL of your Jenkins server.
    • job_name: The name of the Jenkins job you want to trigger.
    • username and api_token: Your Jenkins credentials for authentication.
  • Parameters:

    • branch_name and commit_id are examples of parameters that the Jenkins job will use. These could be passed dynamically based on your CI/CD workflow.
  • Requests Library:

    • The script uses Python's requests library to make a POST request to the Jenkins server to trigger the job.
    • auth=(username, api_token) is used to authenticate with the Jenkins API.
  • Response Handling:

    • If the job is triggered successfully, Jenkins responds with a 201 status code, which the script checks to confirm success.

Step 4: Integrate with GitHub Webhooks

To trigger this Python script automatically whenever a new commit is pushed to the main branch, you can configure a GitHub webhook that sends a POST request to your server (where this Python script is running) whenever a push event occurs.

  • GitHub Webhook Configuration:

    1. Go to your GitHub repository settings.
    2. Under "Webhooks," click "Add webhook."
    3. Set the "Payload URL" to the URL of your server that runs the Python script.
    4. Choose application/json as the content type.
    5. Set the events to listen for (e.g., push events).
    6. Save the webhook.
  • Handling the Webhook:

    • You may need to set up a simple HTTP server using Flask, FastAPI, or a similar framework to handle the incoming webhook requests from GitHub and trigger the Jenkins job accordingly. For example, with Flask (saved here as webhook_app.py for the deployment step below):

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

# Jenkins server details
jenkins_url = 'http://your-jenkins-server.com'
job_name = 'your-job-name'
username = 'your-username'
api_token = 'your-api-token'

@app.route('/webhook', methods=['POST'])
def github_webhook():
    payload = request.json

    # Extract branch name and commit ID from the payload
    branch_name = payload['ref'].split('/')[-1]  # Get the branch name
    commit_id = payload['after']

    # Only trigger the job if it's the main branch
    if branch_name == 'main':
        job_url = f'{jenkins_url}/job/{job_name}/buildWithParameters'
        params = {
            'BRANCH_NAME': branch_name,
            'COMMIT_ID': commit_id
        }

        response = requests.post(job_url, auth=(username, api_token), params=params)

        if response.status_code == 201:
            return jsonify({'message': 'Jenkins job triggered successfully.'}), 201
        else:
            return jsonify({'message': 'Failed to trigger Jenkins job.'}), response.status_code

    return jsonify({'message': 'No action taken.'}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Step 5: Deploying the Flask App

Deploy this Flask app on a server and make sure it is reachable from the public internet so that GitHub's webhook can deliver events to it.
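For instance, you could run it under a production WSGI server such as Gunicorn (a sketch; it assumes the handler above was saved as webhook_app.py):

pip install flask gunicorn
gunicorn --bind 0.0.0.0:5000 webhook_app:app

In production you would typically also put the app behind a reverse proxy with HTTPS and verify GitHub's X-Hub-Signature-256 header before acting on a payload.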

Conclusion

This example illustrates how Python can be integrated into a CI/CD pipeline, interacting with tools like Jenkins to automate essential tasks.

3. Configuration Management and Orchestration

  • Tooling: Ansible, Chef, Puppet
  • Example: Using Python scripts with Ansible to manage the configuration of servers. Scripts can be used to ensure that all servers are configured consistently and to manage complex deployments that require orchestration of multiple services.

In this example, we'll use Python to manage server configurations with Ansible. The script will run Ansible playbooks to ensure servers are configured consistently and orchestrate the deployment of multiple services.

Example: Automating Server Configuration with Ansible and Python

Scenario:
You need to configure a set of servers to ensure they have the latest version of a web application, along with necessary dependencies and configurations. You want to use Ansible for configuration management and Python to trigger and manage Ansible playbooks.

Step 1: Create Ansible Playbooks

playbooks/setup.yml:
This Ansible playbook installs necessary packages and configures the web server.

---
- name: Configure web servers
  hosts: web_servers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Deploy web application
      copy:
        src: /path/to/local/webapp
        dest: /var/www/html/webapp
        owner: www-data
        group: www-data
        mode: '0644'

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
        enabled: yes

inventory/hosts:
Define your servers in the Ansible inventory file.

[web_servers]
server1.example.com
server2.example.com
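Before running any playbooks, you can verify SSH connectivity to the inventory hosts with Ansible's built-in ping module:

ansible web_servers -i inventory/hosts -m ping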

Step 2: Write the Python Script

The Python script will use the subprocess module to run Ansible commands and manage playbook execution.

import subprocess

def run_ansible_playbook(playbook_path, inventory_path):
    """
    Run an Ansible playbook using the subprocess module.

    :param playbook_path: Path to the Ansible playbook file.
    :param inventory_path: Path to the Ansible inventory file.
    :return: None
    """
    try:
        result = subprocess.run(
            ['ansible-playbook', '-i', inventory_path, playbook_path],
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True
        )
        print('Ansible playbook executed successfully.')
        print(result.stdout)
    except subprocess.CalledProcessError as e:
        print('Ansible playbook execution failed.')
        print(e.stderr)

if __name__ == '__main__':
    # Paths to the playbook and inventory files
    playbook_path = 'playbooks/setup.yml'
    inventory_path = 'inventory/hosts'

    # Run the Ansible playbook
    run_ansible_playbook(playbook_path, inventory_path)

Step 3: Explanation

  • Ansible Playbook (setup.yml):

    • Tasks: This playbook installs Nginx, deploys the web application, and ensures Nginx is running.
    • Hosts: web_servers is a group defined in the inventory file.
  • Inventory File (hosts):

    • Groups: Defines which servers are part of the web_servers group.
  • Python Script (run_ansible_playbook function):

    • subprocess.run: Executes the ansible-playbook command to apply configurations defined in the playbook.
    • Error Handling: Catches and prints errors if the playbook execution fails.

Step 4: Running the Script

  • Make sure Ansible is installed on the system where the Python script is running.
  • Ensure the ansible-playbook command is accessible in the system PATH.
  • Execute the Python script to apply the Ansible configurations:
python3 your_script_name.py

Step 5: Advanced Use Cases

  • Dynamic Inventory: Use Python to generate dynamic inventory files based on real-time data from a database or an API (see the sketch after this list).
  • Role-based Configurations: Define more complex configurations using Ansible roles and use Python to manage role-based deployments.
  • Notifications and Logging: Extend the Python script to send notifications (e.g., via email or Slack) or log detailed information about the playbook execution.
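To illustrate the dynamic inventory idea, here is a minimal sketch (the host list is hypothetical; a real script would query a database or an API). Ansible runs an executable inventory script with --list and reads inventory JSON from stdout:

#!/usr/bin/env python3
"""Minimal dynamic inventory sketch for Ansible (hypothetical data source)."""
import argparse
import json

def get_web_servers():
    # Hypothetical lookup; replace with a database or API query
    return ['server1.example.com', 'server2.example.com']

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--list', action='store_true')
    parser.add_argument('--host')
    args = parser.parse_args()

    if args.list:
        # Supplying _meta.hostvars up front stops Ansible from calling
        # the script once per host with --host
        print(json.dumps({
            'web_servers': {'hosts': get_web_servers()},
            '_meta': {'hostvars': {}},
        }))
    else:
        print(json.dumps({}))

Make the script executable (chmod +x) and point Ansible at it in place of the static inventory file: ansible-playbook -i dynamic_inventory.py playbooks/setup.yml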

Conclusion

By integrating Python with Ansible, you can automate server configuration and orchestration tasks efficiently. Python scripts can manage and trigger Ansible playbooks, ensuring that server configurations are consistent and deployments are orchestrated seamlessly.

4. Monitoring and Alerting with Python

In a modern monitoring setup, you often need to collect metrics and logs from various services, analyze them, and push them to monitoring systems like Prometheus or Elasticsearch. Python can be used to gather and process this data, and set up automated alerts based on specific conditions.

Example: Collecting Metrics and Logs, and Setting Up Alerts

1. Collecting Metrics and Logs

Scenario:
You want to collect custom metrics and logs from your application and push them to Prometheus and Elasticsearch. Additionally, you'll set up automated alerts based on specific conditions.

Step 1: Collecting Metrics with Python and Prometheus

To collect and expose custom metrics from your application, you can use the prometheus_client library in Python.

Install prometheus_client:

pip install prometheus_client

Python Script to Expose Metrics (metrics_server.py):

from prometheus_client import start_http_server, Counter
import random
import time

# Track the number of requests with a Counter: the value only ever increases,
# and the alert rule below uses rate(), which is meant for counters
REQUESTS = Counter('app_requests_total', 'Total number of requests processed by the application')

def process_request():
    """Simulate processing a request."""
    REQUESTS.inc()  # Increment the request count

if __name__ == '__main__':
    # Start up the server to expose metrics
    start_http_server(8000)  # Metrics will be available at http://localhost:8000/metrics

    # Simulate processing requests
    while True:
        process_request()
        time.sleep(random.uniform(0.5, 1.5))  # Simulate random request intervals
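With the script running, you can spot-check the exposed endpoint; the output should include the app_requests_total metric defined above:

curl -s http://localhost:8000/metrics | grep app_requests_total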

Step 2: Collecting Logs with Python and Elasticsearch

To push logs to Elasticsearch, you can use the elasticsearch Python client.

Install elasticsearch:

pip install elasticsearch

Python Script to Send Logs (log_collector.py):

from elasticsearch import Elasticsearch
import logging
import time

# Elasticsearch client setup (the URL form works with both the 7.x and 8.x clients)
es = Elasticsearch('http://localhost:9200')
index_name = 'application-logs'

# Configure Python logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('log_collector')

def log_message(message):
    """Log a message and send it to Elasticsearch."""
    logger.info(message)
    es.index(index=index_name, body={'message': message, 'timestamp': time.time()})

if __name__ == '__main__':
    while True:
        log_message('This is a sample log message.')
        time.sleep(5)  # Log every 5 seconds
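You can confirm that documents are being indexed with a quick query against the index (assumes Elasticsearch is running locally on the default port):

curl 'http://localhost:9200/application-logs/_search?size=1'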

Step 3: Setting Up Alerts

To set up alerts, you need to define alerting rules based on the metrics and logs collected. Here’s an example of how you can configure alerts with Prometheus.

Prometheus Alerting Rules (prometheus_rules.yml):

groups:
- name: example_alerts
  rules:
  - alert: HighRequestRate
    expr: rate(app_requests_total[1m]) > 5
    for: 2m
    labels:
      severity: critical
    annotations:
      summary: "High request rate detected"
      description: "Request rate is above 5 requests per minute for the last 2 minutes."

Deploying Alerts:

  1. Update Prometheus Configuration: Ensure that your Prometheus server is configured to load the alerting rules file. Update your prometheus.yml configuration file:
   rule_files:
     - 'prometheus_rules.yml'
  2. Reload Prometheus Configuration: After updating the configuration, reload Prometheus to apply the new rules.
   kill -HUP $(pgrep prometheus)
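Optionally, you can validate the rules file before reloading with promtool, which ships with Prometheus:

promtool check rules prometheus_rules.yml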

Grafana Setup:

  1. Add Prometheus as a Data Source:
    Go to Grafana's data source settings and add Prometheus.

  2. Create Dashboards:
    Create dashboards in Grafana to visualize the metrics exposed by your application. You can set up alerts in Grafana as well, based on the metrics from Prometheus.

Elasticsearch Alerting:

  1. Install Elastic Stack Alerting Plugin:
    If you're using Elasticsearch with Kibana, you can use Kibana's alerting features to create alerts based on log data. You can set thresholds and get notifications via email, Slack, or other channels.

  2. Define Alert Conditions:
    Use Kibana to define alert conditions based on your log data indices.

Conclusion

By using Python scripts to collect and process metrics and logs, and integrating them with tools like Prometheus and Elasticsearch, you can create a robust monitoring and alerting system. The examples provided show how to expose custom metrics, push logs, and set up alerts for various conditions. This setup ensures you can proactively monitor your application, respond to issues quickly, and maintain system reliability.

5. Use Case: Scripting for Routine Tasks and Maintenance

Routine maintenance tasks like backups, system updates, and log rotation are essential for keeping your infrastructure healthy. You can automate these tasks using Python scripts and schedule them with cron jobs. Below are examples of Python scripts for common routine maintenance tasks and how to set them up with cron.

Example: Python Scripts for Routine Tasks

1. Backup Script

Scenario:
Create a Python script to back up a directory to a backup location. This script will be scheduled to run daily to ensure that your data is regularly backed up.

Backup Script (backup_script.py):

import shutil
import os
from datetime import datetime

# Define source and backup directories
source_dir = '/path/to/source_directory'
backup_dir = '/path/to/backup_directory'

# Create a timestamped backup file name
timestamp = datetime.now().strftime('%Y%m%d-%H%M%S')
backup_file = f'{backup_dir}/backup_{timestamp}.tar.gz'

def create_backup():
    """Create a compressed tar archive of the source directory."""
    os.makedirs(backup_dir, exist_ok=True)  # ensure the backup directory exists
    shutil.make_archive(backup_file.replace('.tar.gz', ''), 'gztar', source_dir)
    print(f'Backup created at {backup_file}')

if __name__ == '__main__':
    create_backup()

2. System Update Script

Scenario:
Create a Python script to update the system packages. This script will ensure that the system is kept up-to-date with the latest security patches and updates.

System Update Script (system_update.py):

import subprocess

def update_system():
    """Update the system packages."""
    try:
        subprocess.run(['sudo', 'apt-get', 'update'], check=True)
        subprocess.run(['sudo', 'apt-get', 'upgrade', '-y'], check=True)
        print('System updated successfully.')
    except subprocess.CalledProcessError as e:
        print(f'Failed to update the system: {e}')

if __name__ == '__main__':
    update_system()

3. Log Rotation Script

Scenario:
Create a Python script to rotate log files, moving old logs to an archive directory and compressing them.

Log Rotation Script (log_rotation.py):

import gzip
import os
import shutil
from datetime import datetime

# Define log directory and archive directory
log_dir = '/path/to/log_directory'
archive_dir = '/path/to/archive_directory'

def rotate_logs():
    """Rotate log files by compressing them into the archive directory."""
    os.makedirs(archive_dir, exist_ok=True)
    for log_file in os.listdir(log_dir):
        log_path = os.path.join(log_dir, log_file)
        if os.path.isfile(log_path):
            timestamp = datetime.now().strftime('%Y%m%d-%H%M%S')
            archive_file = os.path.join(archive_dir, f'{log_file}_{timestamp}.gz')
            # Gzip the log into the archive directory, then remove the original
            with open(log_path, 'rb') as src, gzip.open(archive_file, 'wb') as dst:
                shutil.copyfileobj(src, dst)
            os.remove(log_path)
            print(f'Log rotated: {archive_file}')

if __name__ == '__main__':
    rotate_logs()

Setting Up Cron Jobs

You need to set up cron jobs to schedule these scripts to run at specific intervals. Use the crontab command to edit the cron schedule.

  1. Open the Crontab File:
   crontab -e
  2. Add Cron Job Entries:
  • Daily Backup at 2 AM:

     0 2 * * * /usr/bin/python3 /path/to/backup_script.py
    
  • Weekly System Update on Sunday at 3 AM:

     0 3 * * 0 /usr/bin/python3 /path/to/system_update.py
    
  • Log Rotation Every Day at Midnight:

     0 0 * * * /usr/bin/python3 /path/to/log_rotation.py
    

Explanation:

  • 0 2 * * *: Runs the script at 2:00 AM every day.
  • 0 3 * * 0: Runs the script at 3:00 AM every Sunday.
  • 0 0 * * *: Runs the script at midnight every day.
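For reference, the five cron fields map out as follows (annotated against the daily backup entry):

# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
  0 2 * * * /usr/bin/python3 /path/to/backup_script.py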

Conclusion

Using Python scripts for routine tasks and maintenance helps automate critical processes such as backups, system updates, and log rotation. By scheduling these scripts with cron jobs, you ensure that these tasks are performed consistently and without manual intervention. This approach enhances the reliability and stability of your infrastructure, keeping it healthy and up-to-date.

Source: reposted from https://dev.to/pratik_nalawade/python-for-devops-15hg