The Quest for Performance Part III : C Force

In the two prior installments of this series, we considered the performance of floating point operations in Perl,
Python and R in a toy example that computed the function cos(sin(sqrt(x))), where x was a very large array of 50M double precision floating point numbers.
Hybrid implementations that delegated the arithmetic intensive part to C were among the most performant implementations. In this installment, we will digress slightly and look at the performance of a pure C implementation of the toy example.
The C code will provide further insights about the importance of memory locality for performance (by default, elements in a C array are stored in sequential addresses in memory, and numerical APIs such as PDL or numpy interface with such containers),
in contrast to containers, e.g. Perl arrays, which do not store their values in sequential addresses in memory. Last, but certainly not least, the C code implementations will allow us to assess whether flags related to floating point operations for the low level compiler (in this case gcc) can affect performance.
This point is worth emphasizing: common mortals are entirely dependent on the choice of compiler flags when "pip"-ing their "install" or building their Inline file. If one does not touch these flags, then one will be blissfully unaware of what one may be missing, or of the pitfalls one may be avoiding.
The humble C makefile allows one to make such performance evaluations explicit.
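
To make the locality point concrete, the following minimal sketch (not part of the benchmark) prints the addresses of consecutive elements of a C array: they are exactly sizeof(double), i.e. 8 bytes, apart, which is what allows vectorized loads and stores to stream through the data.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double *array = malloc(4 * sizeof(double));
    if (array == NULL) return 1;
    // consecutive elements live at consecutive addresses, 8 bytes apart
    for (int i = 0; i < 4; i++) {
        printf("&array[%d] = %p\n", i, (void *)&array[i]);
    }
    free(array);
    return 0;
}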

The C code for our toy example is listed in its entirety below. The code is rather self-explanatory, so we will not spend time explaining it other than pointing out that it contains four functions for:

  • Non-sequential calculation of the expensive function: all three floating point operations take place inside a single loop using one thread
  • Sequential calculation of the expensive function: each of the 3 floating point function evaluations takes place inside a separate loop using one thread
  • Non-sequential OpenMP code: threaded version of the non-sequential code
  • Sequential OpenMP code: threaded version of the sequential code

In this case, one may hope that the compiler is smart enough to recognize that the square root maps to packed (vectorized) floating point operations in assembly, so that one function can be vectorized using the appropriate SIMD instructions (note that we did not use the simd pragma for the OpenMP codes; a sketch of what that would look like follows).
Perhaps the speedup from the vectorization may offset the loss of performance from repeatedly accessing the same memory locations (or not).
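
For reference, this is roughly what an explicitly SIMD-annotated loop would look like; it is a hedged sketch (map_c_array_simd is a hypothetical name) and was not used in the measurements below:

#include <math.h>

// asks the compiler to vectorize the loop body via the OpenMP simd pragma
// (compile with -fopenmp or -fopenmp-simd)
void map_c_array_simd(double* array, int len) {
    #pragma omp simd
    for (int i = 0; i < len; i++) {
        array[i] = cos(sin(sqrt(array[i])));
    }
}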

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <omp.h>

// simulates a large array of random numbers
double* simulate_array(int num_of_elements, int seed);
// OMP environment functions
void _set_openmp_schedule_from_env();
void _set_num_threads_from_env();



// functions to modify C arrays 
void map_c_array(double* array, int len);
void map_c_array_sequential(double* array, int len);
void map_C_array_using_OMP(double* array, int len);
void map_C_array_sequential_using_OMP(double* array, int len);

int main(int argc, char *argv[]) {
    if (argc != 2) {
        printf("Usage: %s \n", argv[0]);
        return 1;
    }

    int array_size = atoi(argv[1]);
    // printf the array size
    printf("Array size: %d\n", array_size);
    double *array = simulate_array(array_size, 1234);

    // Set OMP environment
    _set_openmp_schedule_from_env();
    _set_num_threads_from_env();

    // Perform calculations and collect timing data
    double start_time, end_time, elapsed_time;
    // Non-Sequential calculation
    start_time = omp_get_wtime();
    map_c_array(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Non-sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_c_array_sequential(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    array = simulate_array(array_size, 1234);
    // Parallel calculation using OMP
    start_time = omp_get_wtime();
    map_C_array_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Parallel calculation using OMP time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation using OMP
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_C_array_sequential_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation using OMP time: %f seconds\n", elapsed_time);

    free(array);
    return 0;
}



/*
*******************************************************************************
* OMP environment functions
*******************************************************************************
*/
void _set_openmp_schedule_from_env() {
  char *schedule_env = getenv("OMP_SCHEDULE");
  printf("Schedule from env %s\n", schedule_env ? schedule_env : "(not set)");
  if (schedule_env != NULL) {
    char *kind_str = strtok(schedule_env, ",");
    char *chunk_size_str = strtok(NULL, ",");

    omp_sched_t kind;
    if (strcmp(kind_str, "static") == 0) {
      kind = omp_sched_static;
    } else if (strcmp(kind_str, "dynamic") == 0) {
      kind = omp_sched_dynamic;
    } else if (strcmp(kind_str, "guided") == 0) {
      kind = omp_sched_guided;
    } else {
      kind = omp_sched_auto;
    }
    int chunk_size = (chunk_size_str != NULL) ? atoi(chunk_size_str) : 0; // <= 0 requests the default chunk size
    omp_set_schedule(kind, chunk_size);
  }
}

void _set_num_threads_from_env() {
  char *num = getenv("OMP_NUM_THREADS");
  printf("Number of threads = %s from within C\n", num ? num : "(not set)");
  if (num != NULL) {
    omp_set_num_threads(atoi(num));
  }
}
/*
*******************************************************************************
* Functions that modify C arrays whose address is passed from Perl in C
*******************************************************************************
*/

double* simulate_array(int num_of_elements, int seed) {
  srand(seed); // Seed the random number generator
  double *array = (double *)malloc(num_of_elements * sizeof(double));
  for (int i = 0; i < num_of_elements; i++) {
    array[i] = (double)rand() / RAND_MAX; // random double in [0, 1]
  }
  return array;
}
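
A minimal sketch of the four mapping functions, consistent with the descriptions given earlier (the exact bodies may differ from the original); the schedule(runtime) clause picks up the schedule set by _set_openmp_schedule_from_env:

// all three operations fused in a single loop, single thread
void map_c_array(double* array, int len) {
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

// one operation per loop, single thread; each loop re-traverses the array
void map_c_array_sequential(double* array, int len) {
  for (int i = 0; i < len; i++) array[i] = sqrt(array[i]);
  for (int i = 0; i < len; i++) array[i] = sin(array[i]);
  for (int i = 0; i < len; i++) array[i] = cos(array[i]);
}

// threaded version of the fused loop
void map_C_array_using_OMP(double* array, int len) {
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) {
    array[i] = cos(sin(sqrt(array[i])));
  }
}

// threaded version of the one-operation-per-loop code
void map_C_array_sequential_using_OMP(double* array, int len) {
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = sqrt(array[i]);
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = sin(array[i]);
  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < len; i++) array[i] = cos(array[i]);
}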



A critical question is whether the use of fast-math compiler flags, a trick that trades accuracy for speed, can affect performance.
Here is the makefile without this compiler flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native  -Wall -std=gnu11 -fopenmp -fstrict-aliasing 
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_noffmath_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_noffmath_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_noffmath_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@



and here is the one with this flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native -Wall -std=gnu11 -fopenmp -fstrict-aliasing -ffast-math
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $< -o $@
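
Assuming the two makefiles are saved as Makefile.noffmath and Makefile.ffmath (the file names here are hypothetical), both variants can be built as:

make -f Makefile.noffmath
make -f Makefile.ffmath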



And here are the results of running these two programs:

  • Without -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_noffmath_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 1.12 seconds
Sequential calculation time: 0.95 seconds
Parallel calculation using OMP time: 0.17 seconds
Sequential calculation using OMP time: 0.15 seconds
  • With -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 0.27 seconds
Sequential calculation time: 0.28 seconds
Parallel calculation using OMP time: 0.05 seconds
Sequential calculation using OMP time: 0.06 seconds

Note that one can use fastmath in Numba code as follows (the default is fastmath=False):

import numpy as np
from numba import njit

@njit(nogil=True, fastmath=True)
def compute_inplace_with_numba(array):
    np.sqrt(array, array)  # the second argument is the output buffer, so the
    np.sin(array, array)   # three ufuncs operate on the array in place
    np.cos(array, array)

A few points that are worth noting:

  • -ffast-math gives a major boost in performance (about 300% for both the single threaded and the multi-threaded code), but it can generate erroneous results (a sketch of one failure mode follows this list)
  • Fastmath also works in Numba, but it should be avoided for the same reasons it should be avoided in any application that strives for accuracy
  • The sequential single threaded C code gives performance similar to the single threaded PDL and numpy implementations
  • Somewhat surprisingly, the sequential code is about 20% faster than the non-sequential code when the correct (non-fast) math is used
  • Unsurprisingly, the multi-threaded code is faster than the single threaded code :)
  • I still cannot explain how Numba delivers a 50% performance premium over the C code for this rather simple function
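
As an illustration of the kind of error -ffast-math can introduce, consider the following hedged sketch: -ffast-math implies -ffinite-math-only, under which gcc may assume no NaNs exist and fold the isnan check below to "false", silently hiding invalid inputs such as the square root of a negative number.

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = sqrt(-1.0); // NaN under IEEE 754 semantics
    // With -ffinite-math-only the compiler is allowed to assume x is
    // never NaN, so this branch may be optimized away.
    if (isnan(x)) {
        printf("caught a NaN\n");
    } else {
        printf("no NaN detected\n");
    }
    return 0;
}

Compiling once with gcc -O2 demo.c -lm and once with -ffast-math added shows whether the check survives on a given compiler version.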

title: " The Quest for Performance Part III : C Force "

date: 2024-07-07

In the two prior installments of this series, we considered the performance of floating operations in Perl,
Python and R in a toy example that computed the function cos(sin(sqrt(x))), where x was a very large array of 50M double precision floating numbers.
Hybrid implementations that delegated the arithmetic intensive part to C were among the most performant implementations. In this installment, we will digress slightly and look at the performance of a pure C code implementation of the toy example.
The C code will provide further insights about the importance of memory locality for performance (by default elements in a C array are stored in sequential addresses in memory, and numerical APIs such as PDL or numpy interface with such containers) vis-a-vis containers,
e.g. Perl arrays which do not store their values in sequential addresses in memory. Last, but certainly not least, the C code implementations will allow us to assess whether flags related to floating point operations for the low level compiler (in this case gcc) can affect performance.
This point is worth emphasizing: common mortals are entirely dependent on the choice of compiler flags when "piping" their "install" or building their Inline file. If one does not touch these flags, then one will be blissfully unaware of what they may missing, or pitfalls they may be avoiding.
The humble C file makefile allows one to make such performance evaluations explicitly.

The C code for our toy example is listed in its entirety below. The code is rather self-explanatory, so will not spend time explaining other than pointing out that it contains four functions for

  • Non-sequential calculation of the expensive function : all three floating pointing operations take place inside a single loop using one thread
  • Sequential calculations of the expensive function : each of the 3 floating point function evaluations takes inside a separate loop using one thread
  • Non-sequential OpenMP code : threaded version of the non-sequential code
  • Sequential OpenMP code: threaded of the sequential code

In this case, one may hope that the compiler is smart enough to recognize that the square root maps to packed (vectorized) floating pointing operations in assembly, so that one function can be vectorized using the appropriate SIMD instructions (note we did not use the simd program for the OpenMP codes).
Perhaps the speedup from the vectorization may offset the loss of performance from repeatedly accessing the same memory locations (or not).

#include 
#include 
#include 
#include 
#include 

// simulates a large array of random numbers
double*  simulate_array(int num_of_elements,int seed);
// OMP environment functions
void _set_openmp_schedule_from_env();
void _set_num_threads_from_env();



// functions to modify C arrays 
void map_c_array(double* array, int len);
void map_c_array_sequential(double* array, int len);
void map_C_array_using_OMP(double* array, int len);
void map_C_array_sequential_using_OMP(double* array, int len);

int main(int argc, char *argv[]) {
    if (argc != 2) {
        printf("Usage: %s \n", argv[0]);
        return 1;
    }

    int array_size = atoi(argv[1]);
    // printf the array size
    printf("Array size: %d\n", array_size);
    double *array = simulate_array(array_size, 1234);

    // Set OMP environment
    _set_openmp_schedule_from_env();
    _set_num_threads_from_env();

    // Perform calculations and collect timing data
    double start_time, end_time, elapsed_time;
    // Non-Sequential calculation
    start_time = omp_get_wtime();
    map_c_array(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Non-sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_c_array_sequential(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation time: %f seconds\n", elapsed_time);
    free(array);

    array = simulate_array(array_size, 1234);
    // Parallel calculation using OMP
    start_time = omp_get_wtime();
    map_C_array_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Parallel calculation using OMP time: %f seconds\n", elapsed_time);
    free(array);

    // Sequential calculation using OMP
    array = simulate_array(array_size, 1234);
    start_time = omp_get_wtime();
    map_C_array_sequential_using_OMP(array, array_size);
    end_time = omp_get_wtime();
    elapsed_time = end_time - start_time;
    printf("Sequential calculation using OMP time: %f seconds\n", elapsed_time);

    free(array);
    return 0;
}



/*
*******************************************************************************
* OMP environment functions
*******************************************************************************
*/
void _set_openmp_schedule_from_env() {
  char *schedule_env = getenv("OMP_SCHEDULE");
  printf("Schedule from env %s\n", getenv("OMP_SCHEDULE"));
  if (schedule_env != NULL) {
    char *kind_str = strtok(schedule_env, ",");
    char *chunk_size_str = strtok(NULL, ",");

    omp_sched_t kind;
    if (strcmp(kind_str, "static") == 0) {
      kind = omp_sched_static;
    } else if (strcmp(kind_str, "dynamic") == 0) {
      kind = omp_sched_dynamic;
    } else if (strcmp(kind_str, "guided") == 0) {
      kind = omp_sched_guided;
    } else {
      kind = omp_sched_auto;
    }
    int chunk_size = atoi(chunk_size_str);
    omp_set_schedule(kind, chunk_size);
  }
}

void _set_num_threads_from_env() {
  char *num = getenv("OMP_NUM_THREADS");
  printf("Number of threads = %s from within C\n", num);
  omp_set_num_threads(atoi(num));
}
/*
*******************************************************************************
* Functions that modify C arrays whose address is passed from Perl in C
*******************************************************************************
*/

double*  simulate_array(int num_of_elements, int seed) {
  srand(seed); // Seed the random number generator
  double *array = (double *)malloc(num_of_elements * sizeof(double));
  for (int i = 0; i 



A critical question is whether the use of fast floating compiler flags, a trick that trades speed for accuracy of the code, can affect performance.
Here is the makefile withut this compiler flag

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native  -Wall -std=gnu11 -fopenmp -fstrict-aliasing 
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_noffmath_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_noffmath_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_noffmath_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $



and here is the one with this flag:

CC = gcc
CFLAGS = -O3 -ftree-vectorize  -march=native -Wall -std=gnu11 -fopenmp -fstrict-aliasing -ffast-math
LDFLAGS = -fPIE -fopenmp
LIBS =  -lm

SOURCES = inplace_array_mod_with_OpenMP.c
OBJECTS = $(SOURCES:.c=_gcc.o)
EXECUTABLE = inplace_array_mod_with_OpenMP_gcc

all: $(SOURCES) $(EXECUTABLE)

clean:
    rm -f $(OBJECTS) $(EXECUTABLE)

$(EXECUTABLE): $(OBJECTS)
    $(CC) $(LDFLAGS) $(OBJECTS) $(LIBS) -o $@

%_gcc.o : %.c 
    $(CC) $(CFLAGS) -c $



And here are the results of running these two programs

  • Without -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_noffmath_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 1.12 seconds
Sequential calculation time: 0.95 seconds
Parallel calculation using OMP time: 0.17 seconds
Sequential calculation using OMP time: 0.15 seconds
  • With -ffast-math
OMP_SCHEDULE=guided,1 OMP_NUM_THREADS=8 ./inplace_array_mod_with_OpenMP_gcc 50000000
Array size: 50000000
Schedule from env guided,1
Number of threads = 8 from within C
Non-sequential calculation time: 0.27 seconds
Sequential calculation time: 0.28 seconds
Parallel calculation using OMP time: 0.05 seconds
Sequential calculation using OMP time: 0.06 seconds

Note that one can use the fastmath in Numba code as follows (the default is fastmath=False):

@njit(nogil=True,fastmath=True)
def compute_inplace_with_numba(array):
    np.sqrt(array,array)
    np.sin(array,array)
    np.cos(array,array)

A few points that are worth noting:

  • The -ffast-math gives major boost in performance (about 300% for both the single threaded and the multi-threaded code), but it can generate erroneous results
  • Fastmath also works in Numba, but should be avoided for the same reasons it should be avoided in any application that strives for accuracy
  • The sequential C single threaded code gives performance similar to the single threaded PDL and Numpy
  • Somewhat surprisingly, the sequential code is about 20% faster than the non-sequential code when the correct (non-fast) math is used.
  • Unsurprisingly, multi-threaded code is faster than single threaded code :)
  • I still cannot explain how numbas delivers a 50% performance premium over the C code of this rather simple function.
Reprinted from: https://dev.to/chrisarg/the-quest-for-performance-part-iii-c-force-3lkf