The Quest for Performance Part III: C Force

In the two prior installments of this series, we considered the performance of floating point operations in Perl,
Python and R in a toy example that computed the function cos(sin(sqrt(x))), where x was a very large array of 50M double precision floating point numbers.
Hybrid implementations that delegated the arithmetic-intensive part to C were among the most performant. In this installment, we will digress slightly and look at the performance of a pure C implementation of the toy example.
The C code will provide further insights about the importance of memory locality for performance (by default, elements of a C array are stored at sequential addresses in memory, and numerical APIs such as PDL or numpy interface with such containers), in contrast to containers,
e.g. Perl arrays, which do not store their values at sequential addresses in memory. Last, but certainly not least, the C implementations will allow us to assess whether floating point related flags for the low-level compiler (in this case gcc) can affect performance.
This point is worth emphasizing: common mortals are entirely dependent on the choice of compiler flags when "pip"-ing their "install" or building their Inline file. If one never touches these flags, one will remain blissfully unaware of what one may be missing, or of the pitfalls one may be avoiding.
The humble makefile of a C project makes such performance evaluations explicit.
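
To make the locality point concrete, here is a minimal sketch (not part of the benchmark) showing that successive elements of a C array sit exactly sizeof(double) bytes apart, which is what lets the hardware prefetcher and SIMD units stream through them:

#include <stdio.h>

int main(void) {
    double a[4] = {0.0, 1.0, 2.0, 3.0};
    // Successive elements of a C array occupy adjacent addresses,
    // exactly sizeof(double) bytes apart.
    for (int i = 1; i < 4; i++)
        printf("&a[%d] - &a[%d] = %td bytes\n", i, i - 1,
               (char *)&a[i] - (char *)&a[i - 1]);
    return 0;
}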

The C code for our toy example is listed in its entirety below. The code is rather self-explanatory, so we will not spend time explaining it beyond pointing out that it contains four functions for:

  • Non-sequential calculation of the expensive function: all three floating point operations take place inside a single loop using one thread
  • Sequential calculation of the expensive function: each of the 3 floating point function evaluations takes place inside a separate loop using one thread
  • Non-sequential OpenMP code: a threaded version of the non-sequential code
  • Sequential OpenMP code: a threaded version of the sequential code

In this case, one may hope that the compiler is smart enough to recognize that the square root maps to packed (vectorized) floating point operations in assembly, so that one function can be vectorized using the appropriate SIMD instructions (note that we did not use the simd pragma for the OpenMP codes).
Perhaps the speedup from vectorization may offset (or not) the loss of performance from repeatedly accessing the same memory locations.
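
For reference, requesting vectorization explicitly would only take an extra simd clause on the OpenMP pragma; a minimal, hypothetical sketch of such a variant (not part of the measured code) looks like this:

#include <math.h>

// Hypothetical variant: the simd clause asks OpenMP to vectorize the loop
// body in addition to distributing iterations across threads.
void map_c_array_simd(double *array, int len) {
    #pragma omp parallel for simd
    for (int i = 0; i < len; i++) {
        array[i] = cos(sin(sqrt(array[i])));
    }
}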

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <omp.h>

// simulates a large array of random numbers
double* simulate_array(int num_of_elements, int seed);
// OMP environment functions
void _set_openmp_schedule_from_env();
void _set_num_threads_from_env();



// functions to modify C arrays 
void map_c_array(double* array, int len);
void map_c_array_sequential(double* array, int len);
void map_C_array_using_OMP(double* array, int len);
void map_C_array_sequential_using_OMP(double* array, int len);

int main(int argc, char *argv[]) {
    if (argc != 2) {
        printf("Usage: %s \n", argv[0]);
        return 1;
    }

    int array_size = atoi(argv[1]);
    // print the array size
    printf("Array size: %d\n", array_size);
    double *array = simulate_array(array_size, 1234);

    // Set OMP environment
    _set_openmp_schedule_from_env();
    _set_num_threads_from_env();

    // Perform calculations and collect timing data
    double start_time, end_time, elapsed_time;
    // Non-Sequential calculation
    start_time = omp_get_wtime();
    map_c_array(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Non-sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_c_array_sequential(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    array = simulate_array(array_size, 1234);
    // Parallel calculation using OMP
    start_time = omp_get_wtime();
    map_C_array_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Parallel calculation using OMP time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation using OMP
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_C_array_sequential_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation using OMP time: %f seconds\n", elapsed_time);

    free(array);
    return 0;
}



/*
*******************************************************************************
* OMP environment functions
*******************************************************************************
*/
void _set_openmp_schedule_from_env() {
  char *schedule_env = getenv("OMP_SCHEDULE");
  if (schedule_env != NULL) {
    printf("Schedule from env %s\n", schedule_env);
    char *kind_str = strtok(schedule_env, ",");
    char *chunk_size_str = strtok(NULL, ",");

    omp_sched_t kind;
    if (strcmp(kind_str, "static") == 0) {
      kind = omp_sched_static;
    } else if (strcmp(kind_str, "dynamic") == 0) {
      kind = omp_sched_dynamic;
    } else if (strcmp(kind_str, "guided") == 0) {
      kind = omp_sched_guided;
    } else {
      kind = omp_sched_auto;
    }
    // Guard against a missing chunk size (e.g. OMP_SCHEDULE=static);
    // a chunk size of 0 lets the runtime pick its default.
    int chunk_size = (chunk_size_str != NULL) ? atoi(chunk_size_str) : 0;
    omp_set_schedule(kind, chunk_size);
  }
}

void _set_num_threads_from_env() {
  char *num = getenv("OMP_NUM_THREADS");
  if (num != NULL) { // avoid passing NULL to printf/atoi when the variable is unset
    printf("Number of threads = %s from within C\n", num);
    omp_set_num_threads(atoi(num));
  }
}
/*
*******************************************************************************
* Functions that allocate and modify C arrays in place
*******************************************************************************
*/

double* simulate_array(int num_of_elements, int seed) {
  srand(seed); // Seed the random number generator
  double *array = (double *)malloc(num_of_elements * sizeof(double));
  for (int i = 0; i < num_of_elements; i++) {
    array[i] = (double)rand() / RAND_MAX; // random double in [0, 1]
  }
  return array;
}
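
/*
 * The four mapping functions below are a minimal sketch, reconstructed from
 * their descriptions earlier in the text; the exact loop bodies and the
 * schedule(runtime) clauses are assumptions (the latter chosen so that the
 * schedule installed via omp_set_schedule takes effect).
 */

// Non-sequential calculation: all three floating point operations
// take place inside a single loop, one thread.
void map_c_array(double* array, int len) {
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

// Sequential calculation: each function evaluation gets its own loop,
// one thread.
void map_c_array_sequential(double* array, int len) {
  for (int i = 0; i < len; i++) array[i] = sqrt(array[i]);
  for (int i = 0; i < len; i++) array[i] = sin(array[i]);
  for (int i = 0; i < len; i++) array[i] = cos(array[i]);
}

// Threaded version of the non-sequential code.
void map_C_array_using_OMP(double* array, int len) {
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

// Threaded version of the sequential code.
void map_C_array_sequential_using_OMP(double* array, int len) {
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = sqrt(array[i]);
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = sin(array[i]);
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = cos(array[i]);
}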



A critical question is whether the use of fast-math compiler flags, a trick that trades accuracy for speed, can affect performance.
Here is the makefile without this compiler flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native  -Wall -std=gnu11 -fopenmp -fstrict-aliasing 
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_noffmath_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_noffmath_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_noffmath_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@



and here is the one with this flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native -Wall -std=gnu11 -fopenmp -fstrict-aliasing -ffast-math
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@



And here are the results of running these two programs:

  • Without -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_noffmath_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 1.12 seconds
Sequential calculation time: 0.95 seconds
Parallel calculation using OMP time: 0.17 seconds
Sequential calculation using OMP time: 0.15 seconds
  • With -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 0.27 seconds
Sequential calculation time: 0.28 seconds
Parallel calculation using OMP time: 0.05 seconds
Sequential calculation using OMP time: 0.06 seconds

Note that one can use fastmath in Numba code as follows (the default is fastmath=False):

from numba import njit
import numpy as np

@njit(nogil=True, fastmath=True)
def compute_inplace_with_numba(array):
    np.sqrt(array, array)
    np.sin(array, array)
    np.cos(array, array)

A few points worth noting:

  • The -ffast-math flag gives a major boost in performance (about 300% for both the single-threaded and the multi-threaded code), but it can generate erroneous results
  • Fastmath also works in Numba, but should be avoided for the same reasons it should be avoided in any application that strives for accuracy
  • The sequential single-threaded C code gives performance similar to the single-threaded PDL and numpy implementations
  • Somewhat surprisingly, the sequential code is about 20% faster than the non-sequential code when the correct (non-fast) math is used
  • Unsurprisingly, the multi-threaded code is faster than the single-threaded code :)
  • I still cannot explain how Numba delivers a 50% performance premium over the C code for this rather simple function.
