dax: sharing one instance across multiple dax adapters #10650
checkupup wants to merge 1 commit into thesofproject:main
Conversation
DAX adapters may be created across multiple pipelines simultaneously, but only one instance can be active at any given time. This approach prevents the memory exhaustion that would arise if multiple DAX adapters were created concurrently.

Signed-off-by: Jun Lai <jun.lai@dolby.com>
Can one of the admins verify this patch?
Before each process() call, every DAX adapter first attempts to acquire ownership. To avoid transferring ownership while another adapter is still using the instance, the transfer is only permitted when the DAX instance's owner_id is invalid. The DAX adapter in the Speaker pipeline has the highest priority; it sets the force_owner field to claim ownership when needed.
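The handshake described above can be sketched with C11 atomics, a minimal illustration of "take ownership only while owner_id is invalid" (the names `shared_owner`, `dax_try_acquire`, and `dax_release` are illustrative, not the actual SOF symbols):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define DAX_OWNER_ID_INVALID (-1)

/* Hypothetical sketch of the ownership handshake: a DAX adapter may take
 * ownership only while the shared owner slot is invalid.
 */
static atomic_int shared_owner = DAX_OWNER_ID_INVALID;

static bool dax_try_acquire(int owner_id)
{
	int expected = DAX_OWNER_ID_INVALID;

	/* The check and the take happen in one atomic step, so two
	 * adapters cannot both observe "invalid" and both succeed.
	 */
	return atomic_compare_exchange_strong(&shared_owner, &expected,
					      owner_id);
}

static void dax_release(int owner_id)
{
	int expected = owner_id;

	/* Only the current owner may reset ownership to invalid. */
	atomic_compare_exchange_strong(&shared_owner, &expected,
				       DAX_OWNER_ID_INVALID);
}
```

Note that a single compare-exchange is what makes the check-then-take safe; a plain read followed by a write would leave a race window.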
dax_ctx->p_dax = shared_resource.instance;
dax_free(dax_ctx); /* free internal dax instance in dax_ctx if it is valid */
rfree(shared_resource.persist_buffer.addr);
rfree(shared_resource.scratch_buffer.addr);
mod_free()? Aah, ok, I understand now. You want to share these buffers so you don't want to bind them to a specific instance. Are you using the DP scheduler to run DAX? If yes - this is currently unsupported... And this isn't supported in userspace.
Having discussed locally a bit, looks like this is more or less the best you can get now - and this won't work in userspace. But we're working to add support for this
Are you using the DP scheduler to run DAX?
Yes, DAX runs in a DP thread. The shared resources are meant to be accessible to all DAX adapters, and are attached to one of them at a time when ownership is gained.
Hmm, what do you mean by "If yes - this is currently unsupported"? Why can't DAX run using the DP scheduler? Sorry, I am a little confused.
@checkupup sorry, I wasn't very clear. I didn't mean, that DP was unsupported. I meant that sharing memory between userspace (DP) modules is currently unsupported. And all DP modules are now running in userspace on PTL. I understand you aren't using PTL yet, so for now this is working for you. And yes, we're working to add support for that
}
scratch_sz = dax_query_scratch_memory(dax_ctx);
if (dax_buffer_alloc(mod, &dax_ctx->scratch_buffer, scratch_sz) != 0) {
shared_resource.scratch_buffer.addr = rballoc(SOF_MEM_FLAG_LARGE_BUFFER, scratch_sz);
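The idea in the hunk above, allocating the scratch buffer once into the shared object rather than per adapter, can be sketched as follows. This is a minimal stand-in: `malloc()` replaces SOF's `rballoc()`, and `dax_shared_scratch_get` / `dax_shared_buffer` are hypothetical names, not the PR's actual API:

```c
#include <stdlib.h>
#include <stddef.h>

/* Hedged sketch: the scratch buffer lives in the shared object and is
 * reused by whichever adapter currently owns the instance.
 */
struct dax_shared_buffer {
	void *addr;
	size_t size;
};

static struct dax_shared_buffer shared_scratch;

static void *dax_shared_scratch_get(size_t scratch_sz)
{
	/* Reuse the existing buffer when it is large enough. */
	if (shared_scratch.addr && shared_scratch.size >= scratch_sz)
		return shared_scratch.addr;

	/* Otherwise replace it with a larger allocation. */
	free(shared_scratch.addr);
	shared_scratch.addr = malloc(scratch_sz);
	shared_scratch.size = shared_scratch.addr ? scratch_sz : 0;
	return shared_scratch.addr; /* NULL on allocation failure */
}
```

The caller must still check for NULL, since the underlying allocation can fail just like `rballoc()` can.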
Oh, thanks, this should be safer.
{
	struct dax_adapter_data *adapter_data = module_get_private_data(mod);

	return atomic_read(&shared_resource.owner) == adapter_data->owner_id;
maybe you can use mod->dev->ipc_config.id which is globally unique IIRC
Thanks, that’s certainly true.
if (force_owner == adapter_data->owner_id)
	atomic_set(&shared_resource.force_owner, DAX_OWNER_ID_INVALID);
return 0;
}
These atomic-variable manipulations look extremely suspicious to me. You check ownership and, if it's free, you take it; but between the check and the acquisition another instance could take it. I really don't think your use of atomic variables is correct in this commit. Just add a single mutex to protect the whole shared object.
Just add a single mutex to protect the whole shared object
You are right.
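The reviewer's suggestion can be sketched like this: one lock guards the whole shared object, so the ownership check and the take form a single critical section with no race window. `pthread_mutex` stands in for whatever locking primitive the firmware actually uses, and the struct/function names are illustrative only:

```c
#include <pthread.h>
#include <stdbool.h>

#define DAX_OWNER_ID_INVALID (-1)

/* Sketch: a single mutex protects the whole shared resource. */
struct dax_shared_resource {
	pthread_mutex_t lock;
	int owner;
};

static struct dax_shared_resource shared = {
	.lock = PTHREAD_MUTEX_INITIALIZER,
	.owner = DAX_OWNER_ID_INVALID,
};

static bool dax_take_ownership(int owner_id)
{
	bool taken = false;

	pthread_mutex_lock(&shared.lock);
	/* No window between check and assignment: any other adapter
	 * blocks on the mutex until we are done.
	 */
	if (shared.owner == DAX_OWNER_ID_INVALID) {
		shared.owner = owner_id;
		taken = true;
	}
	pthread_mutex_unlock(&shared.lock);
	return taken;
}

static void dax_drop_ownership(int owner_id)
{
	pthread_mutex_lock(&shared.lock);
	/* Only the current owner may release ownership. */
	if (shared.owner == owner_id)
		shared.owner = DAX_OWNER_ID_INVALID;
	pthread_mutex_unlock(&shared.lock);
}
```

Compared with separate atomic reads and writes, the mutex also naturally extends to protect the shared buffers alongside the owner field.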