There is an issue in the current citp_fdtable_ctor function: it sets citp_fdtable.size = rlim.rlim_max, which is typically 1048576. However, in the Linux kernel, when get_unused_fd_flags obtains an fd, it calls __alloc_fd(current->files, 0, rlimit(RLIMIT_NOFILE), flags), and rlimit(RLIMIT_NOFILE) returns tsk->signal->rlim[limit].rlim_cur, not rlim_max — so the table is sized by a different limit than the one the kernel actually enforces.
I understand this is done to accommodate applications like nginx that modify rlim_cur values through setrlimit calls. We could call realloc in the OO_INTERCEPT(setrlimit) function to reallocate memory for citp_fdtable.table. I think this approach would be more flexible, preventing potential malloc failures for citp_fdtable.table when users modify rlim_max to values like 0x3ffffff8. After all, what actually limits the maximum fd value for the current process is rlim_cur, not rlim_max.
We previously implemented this approach in fstack without encountering any issues.
Of course, this is just my personal suggestion. If approved, I'll submit a PR.