
The Quest for Performance Part III : C Force

In the two prior installments of this series, we considered the performance of floating point operations in Perl,
Python and R in a toy example that computed the function cos(sin(sqrt(x))), where x was a very large array of 50M double precision floating point numbers.
Hybrid implementations that delegated the arithmetic intensive part to C were among the most performant. In this installment, we will digress slightly and look at the performance of a pure C implementation of the toy example.
The C code will provide further insights about the importance of memory locality for performance (by default, elements in a C array are stored at sequential addresses in memory, and numerical APIs such as PDL or numpy interface with such containers), vis-a-vis containers
such as Perl arrays, which do not store their values at sequential addresses in memory. Last, but certainly not least, the C implementations will allow us to assess whether flags related to floating point operations for the low level compiler (in this case gcc) can affect performance.
This point is worth emphasizing: common mortals are entirely dependent on the choice of compiler flags when running their "pip install" or building their Inline file. If one does not touch these flags, one will be blissfully unaware of what one may be missing, or of the pitfalls one may be avoiding.
The humble makefile of a C project allows one to make such performance evaluations explicit.
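
To make the locality contrast concrete, here is a minimal sketch of the two storage layouts (assumed for illustration, not taken from the article's code):

#include <stdlib.h>

int main(void) {
    size_t n = 1000;

    // Contiguous storage (what a C array, PDL or numpy uses): one allocation,
    // elements at consecutive addresses; friendly to caches, prefetchers and SIMD.
    double *flat = malloc(n * sizeof *flat);

    // Boxed storage (roughly how a Perl array holds its scalars): an array of
    // pointers, each value in its own allocation, so a traversal chases
    // pointers scattered across the heap.
    double **boxed = malloc(n * sizeof *boxed);
    for (size_t i = 0; i < n; i++)
        boxed[i] = malloc(sizeof **boxed);

    for (size_t i = 0; i < n; i++) free(boxed[i]);
    free(boxed);
    free(flat);
    return 0;
}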

The C code for our toy example is listed in its entirety below. The code is rather self-explanatory, so we will not spend time explaining it, other than pointing out that it contains four functions for:

  • Non-sequential calculation of the expensive function: all three floating point operations take place inside a single loop using one thread
  • Sequential calculation of the expensive function: each of the 3 floating point function evaluations takes place inside a separate loop using one thread
  • Non-sequential OpenMP code: threaded version of the non-sequential code
  • Sequential OpenMP code: threaded version of the sequential code

In this case, one may hope that the compiler is smart enough to recognize that the square root maps to packed (vectorized) floating point operations in assembly, so that the function can be vectorized using the appropriate SIMD instructions (note that we did not use the simd clause for the OpenMP codes; a sketch of what that would look like follows this paragraph).
Perhaps the speedup from the vectorization may offset the loss of performance from repeatedly accessing the same memory locations (or not).
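
For reference, here is a minimal sketch of what opting into OpenMP's SIMD code generation would look like; it is not part of the benchmarked code, and the function name is hypothetical:

#include <math.h>

// Hypothetical variant: a composite parallel-for-simd construct that both
// threads the loop and explicitly requests vectorization of each chunk.
void map_C_array_using_OMP_simd(double* array, int len) {
    #pragma omp parallel for simd
    for (int i = 0; i < len; i++) {
        array[i] = cos(sin(sqrt(array[i])));
    }
}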

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <omp.h>

// simulates a large array of random numbers
double*  simulate_array(int num_of_elements,int seed);
// OMP environment functions
void _set_openmp_schedule_from_env();
void _set_num_threads_from_env();



// functions to modify C arrays 
void map_c_array(double* array, int len);
void map_c_array_sequential(double* array, int len);
void map_C_array_using_OMP(double* array, int len);
void map_C_array_sequential_using_OMP(double* array, int len);

int main(int argc, char *argv[]) {
    if (argc != 2) {
        printf("Usage: %s \n", argv[0]);
        return 1;
    }

    int array_size = atoi(argv[1]);
    // print the array size
    printf("Array size: %d\n", array_size);
    double *array = simulate_array(array_size, 1234);

    // Set OMP environment
    _set_openmp_schedule_from_env();
    _set_num_threads_from_env();

    // Perform calculations and collect timing data
    double start_time, end_time, elapsed_time;
    // Non-Sequential calculation
    start_time = omp_get_wtime();
    map_c_array(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Non-sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_c_array_sequential(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    array = simulate_array(array_size, 1234);
    // Parallel calculation using OMP
    start_time = omp_get_wtime();
    map_C_array_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Parallel calculation using OMP time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation using OMP
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_C_array_sequential_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation using OMP time: %f seconds\n", elapsed_time);

    free(array);
    return 0;
}



/*
*******************************************************************************
* OMP environment functions
*******************************************************************************
*/
void _set_openmp_schedule_from_env() {
  char *schedule_env = getenv("OMP_SCHEDULE");
  printf("Schedule from env %s\n", getenv("OMP_SCHEDULE"));
  if (schedule_env != NULL) {
    char *kind_str = strtok(schedule_env, ",");
    char *chunk_size_str = strtok(NULL, ",");

    omp_sched_t kind;
    if (strcmp(kind_str, "static") == 0) {
      kind = omp_sched_static;
    } else if (strcmp(kind_str, "dynamic") == 0) {
      kind = omp_sched_dynamic;
    } else if (strcmp(kind_str, "guided") == 0) {
      kind = omp_sched_guided;
    } else {
      kind = omp_sched_auto;
    }
    // Default to a chunk size of 1 if OMP_SCHEDULE specifies no chunk size
    int chunk_size = chunk_size_str ? atoi(chunk_size_str) : 1;
    omp_set_schedule(kind, chunk_size);
  }
}

void _set_num_threads_from_env() {
  char *num = getenv("OMP_NUM_THREADS");
  printf("Number of threads = %s from within C\n", num);
  if (num != NULL) omp_set_num_threads(atoi(num)); // guard against an unset variable
}
/*
*******************************************************************************
* Functions that simulate and modify the C arrays
*******************************************************************************
*/

double* simulate_array(int num_of_elements, int seed) {
  srand(seed); // Seed the random number generator
  double *array = (double *)malloc(num_of_elements * sizeof(double));
  for (int i = 0; i < num_of_elements; i++) {
    array[i] = (double)rand() / RAND_MAX; // random double in [0, 1]
  }
  return array;
}
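
/*
 * The original listing was truncated here in transit. The four map functions
 * below are a minimal reconstruction from their descriptions above; the
 * author's exact loop bodies and pragmas may differ.
 */

// Non-sequential: all three floating point operations in a single loop
void map_c_array(double* array, int len) {
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

// Sequential: each function evaluation in its own loop
void map_c_array_sequential(double* array, int len) {
  for (int i = 0; i < len; i++) array[i] = sqrt(array[i]);
  for (int i = 0; i < len; i++) array[i] = sin(array[i]);
  for (int i = 0; i < len; i++) array[i] = cos(array[i]);
}

// Threaded version of the non-sequential code; schedule(runtime) honors the
// schedule installed from OMP_SCHEDULE
void map_C_array_using_OMP(double* array, int len) {
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

// Threaded version of the sequential code
void map_C_array_sequential_using_OMP(double* array, int len) {
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = sqrt(array[i]);
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = sin(array[i]);
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = cos(array[i]);
}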



A critical question is whether the use of fast-math compiler flags, a trick that trades accuracy for speed, can affect performance.
Here is the makefile without this compiler flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native  -Wall -std=gnu11 -fopenmp -fstrict-aliasing 
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_noffmath_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_noffmath_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_noffmath_gcc.o : %.c
    $(CC) $(CFLAGS) -c $< -o $@



and here is the one with this flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native -Wall -std=gnu11 -fopenmp -fstrict-aliasing -ffast-math
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_gcc.o : %.c
    $(CC) $(CFLAGS) -c $< -o $@
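
Assuming the two makefiles are saved as separate files (the file names below are hypothetical), the two binaries are built as:

make -f Makefile.noffmath   # builds inplace_array_mod_with_OpenMP_noffmath_gcc
make -f Makefile.ffmath     # builds inplace_array_mod_with_OpenMP_gcc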



And here are the results of running these two programs:

  • Without -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_noffmath_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 1.12 seconds
Sequential calculation time: 0.95 seconds
Parallel calculation using OMP time: 0.17 seconds
Sequential calculation using OMP time: 0.15 seconds
  • With -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 0.27 seconds
Sequential calculation time: 0.28 seconds
Parallel calculation using OMP time: 0.05 seconds
Sequential calculation using OMP time: 0.06 seconds

Note that one can use fastmath in Numba code as follows (the default is fastmath=False):

from numba import njit
import numpy as np

@njit(nogil=True, fastmath=True)
def compute_inplace_with_numba(array):
    # ufuncs called with an explicit output buffer operate in place
    np.sqrt(array, array)
    np.sin(array, array)
    np.cos(array, array)

A few points that are worth noting:

  • The -ffast-math flag gives a major boost in performance (about 300% for both the single threaded and the multi-threaded code, e.g. 1.12 s down to 0.27 s for the non-sequential single threaded case), but it can generate erroneous results (see the sketch after this list)
  • Fastmath also works in Numba, but should be avoided for the same reasons it should be avoided in any application that strives for accuracy
  • The sequential single threaded C code gives performance similar to that of the single threaded PDL and numpy implementations
  • Somewhat surprisingly, the sequential code is about 20% faster than the non-sequential code when the correct (non-fast) math is used
  • Unsurprisingly, multi-threaded code is faster than single threaded code :)
  • I still cannot explain how Numba delivers a 50% performance premium over the C code of this rather simple function.
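
To make the erroneous-results caveat concrete, here is a small self-contained example (assumed for illustration, not from the article): -ffast-math implies -ffinite-math-only, under which gcc may assume NaNs never occur and fold isnan() checks away.

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 0.0 / 0.0; // NaN
    // Compiled with -ffast-math, gcc is allowed to assume x is finite and
    // may constant-fold this check to 0, silently skipping NaN handling.
    printf("isnan(x) = %d\n", isnan(x));
    return 0;
}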

title: " The Quest for Performance Part III : C Force "

date: 2024-07-07

In the two prior installments of this series, we considered the performance of floating operations in Perl,
Python and R in a toy example that computed the function cos(sin(sqrt(x))), where x was a very large array of 50M double precision floating numbers.
Hybrid implementations that delegated the arithmetic intensive part to C were among the most performant implementations. In this installment, we will digress slightly and look at the performance of a pure C code implementation of the toy example.
The C code will provide further insights about the importance of memory locality for performance (by default elements in a C array are stored in sequential addresses in memory, and numerical APIs such as PDL or numpy interface with such containers) vis-a-vis containers,
e.g. Perl arrays which do not store their values in sequential addresses in memory. Last, but certainly not least, the C code implementations will allow us to assess whether flags related to floating point operations for the low level compiler (in this case gcc) can affect performance.
This point is worth emphasizing: common mortals are entirely dependent on the choice of compiler flags when "piping" their "install" or building their Inline file. If one does not touch these flags, then one will be blissfully unaware of what they may missing, or pitfalls they may be avoiding.
The humble C file makefile allows one to make such performance evaluations explicitly.

The C code for our toy example is listed in its entirety below. The code is rather self-explanatory, so will not spend time explaining other than pointing out that it contains four functions for

  • Non-sequential calculation of the expensive function : all three floating pointing operations take place inside a single loop using one thread
  • Sequential calculations of the expensive function : each of the 3 floating point function evaluations takes inside a separate loop using one thread
  • Non-sequential OpenMP code : threaded version of the non-sequential code
  • Sequential OpenMP code: threaded of the sequential code

In this case, one may hope that the compiler is smart enough to recognize that the square root maps to packed (vectorized) floating pointing operations in assembly, so that one function can be vectorized using the appropriate SIMD instructions (note we did not use the simd program for the OpenMP codes).
Perhaps the speedup from the vectorization may offset the loss of performance from repeatedly accessing the same memory locations (or not).

#include 
#include 
#include 
#include 
#include 

// simulates a large array of random numbers
double*  simulate_array(int num_of_elements,int seed);
// OMP environment functions
void _set_openmp_schedule_from_env();
void _set_num_threads_from_env();



// functions to modify C arrays 
void map_c_array(double* array, int len);
void map_c_array_sequential(double* array, int len);
void map_C_array_using_OMP(double* array, int len);
void map_C_array_sequential_using_OMP(double* array, int len);

int main(int argc, char *argv[]) {
    if (argc != 2) {
        printf("Usage: %s \n", argv[0]);
        return 1;
    }

    int array_size = atoi(argv[1]);
    // printf the array size
    printf("Array size: %d\n", array_size);
    double *array = simulate_array(array_size, 1234);

    // Set OMP environment
    _set_openmp_schedule_from_env();
    _set_num_threads_from_env();

    // Perform calculations and collect timing data
    double start_time, end_time, elapsed_time;
    // Non-Sequential calculation
    start_time = omp_get_wtime();
    map_c_array(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Non-sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_c_array_sequential(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    array = simulate_array(array_size, 1234);
    // Parallel calculation using OMP
    start_time = omp_get_wtime();
    map_C_array_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Parallel calculation using OMP time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation using OMP
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_C_array_sequential_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation using OMP time: %f seconds\n", elapsed_time);

    free(array);
    return 0;
}



/*
*******************************************************************************
* OMP environment functions
*******************************************************************************
*/
void _set_openmp_schedule_from_env() {
  char *schedule_env = getenv("OMP_SCHEDULE");
  printf("Schedule from env %s\n", getenv("OMP_SCHEDULE"));
  if (schedule_env != NULL) {
    char *kind_str = strtok(schedule_env, ",");
    char *chunk_size_str = strtok(NULL, ",");

    omp_sched_t kind;
    if (strcmp(kind_str, "static") == 0) {
      kind = omp_sched_static;
    } else if (strcmp(kind_str, "dynamic") == 0) {
      kind = omp_sched_dynamic;
    } else if (strcmp(kind_str, "guided") == 0) {
      kind = omp_sched_guided;
    } else {
      kind = omp_sched_auto;
    }
    int chunk_size = atoi(chunk_size_str);
    omp_set_schedule(kind, chunk_size);
  }
}

void _set_num_threads_from_env() {
  char *num = getenv("OMP_NUM_THREADS");
  printf("Number of threads = %s from within C\n", num);
  omp_set_num_threads(atoi(num));
}
/*
*******************************************************************************
* Functions that modify C arrays whose address is passed from Perl in C
*******************************************************************************
*/

double*  simulate_array(int num_of_elements, int seed) {
  srand(seed); // Seed the random number generator
  double *array = (double *)malloc(num_of_elements * sizeof(double));
  for (int i = 0; i 



A critical question is whether the use of fast floating compiler flags, a trick that trades speed for accuracy of the code, can affect performance.
Here is the makefile withut this compiler flag

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native  -Wall -std=gnu11 -fopenmp -fstrict-aliasing 
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_noffmath_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_noffmath_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_noffmath_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $



and here is the one with this flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native -Wall -std=gnu11 -fopenmp -fstrict-aliasing -ffast-math
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $



And here are the results of running these two programs

  • Without -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_noffmath_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 1.12 seconds
Sequential calculation time: 0.95 seconds
Parallel calculation using OMP time: 0.17 seconds
Sequential calculation using OMP time: 0.15 seconds
  • With -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 0.27 seconds
Sequential calculation time: 0.28 seconds
Parallel calculation using OMP time: 0.05 seconds
Sequential calculation using OMP time: 0.06 seconds

Note that one can use the fastmath in Numba code as follows (the default is fastmath=False):

@njit(nogil=True,fastmath=True)
def compute_inplace_with_numba(array):
    np.sqrt(array,array)
    np.sin(array,array)
    np.cos(array,array)

A few points that are worth noting:

  • The -ffast-math gives major boost in performance (about 300% for both the single threaded and the multi-threaded code), but it can generate erroneous results
  • Fastmath also works in Numba, but should be avoided for the same reasons it should be avoided in any application that strives for accuracy
  • The sequential C single threaded code gives performance similar to the single threaded PDL and Numpy
  • Somewhat surprisingly, the sequential code is about 20% faster than the non-sequential code when the correct (non-fast) math is used.
  • Unsurprisingly, multi-threaded code is faster than single threaded code :)
  • I still cannot explain how numbas delivers a 50% performance premium over the C code of this rather simple function.
Reprinted from: https://dev.to/chrisarg/the-quest-for-performance-part-iii-c-force-3lkf