Commit 4e4676d2 authored by Nikhil Devshatwar, committed by Mauro Carvalho Chehab

[media] media: ti-vpe: vpdma: Make list post atomic operation


Writing to the "VPDMA list attribute" register is treated as a list
post. It informs the VPDMA firmware to load the list from the address
programmed in the "VPDMA list address" register.

As these two register writes are dependent, it is important that they
happen atomically. This ensures that multiple slices (which share the
same VPDMA) can post lists asynchronously and all of them point to the
correct addresses.

The implementation of the original patch is slightly modified to use a
spinlock instead of a mutex, as the list post is also called from
interrupt context.
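
For illustration, here is a minimal sketch of a hypothetical caller (the
slice context struct, IRQ handler, and warning message are illustrative
only and not part of this patch; vpdma_submit_descs() and the vpdma types
are the ones touched by the diff below). Because a list can be posted from
a hard-IRQ handler like this, the paired writes to the list address and
list attribute registers are serialized with spin_lock_irqsave() rather
than a mutex, which could sleep:

/* Hypothetical slice completion handler, for illustration only */
#include <linux/interrupt.h>
#include <linux/printk.h>

#include "vpdma.h"			/* struct vpdma_data, vpdma_submit_descs() */

struct slice_ctx {			/* illustrative per-slice state */
	struct vpdma_data *vpdma;	/* VPDMA instance shared by all slices */
	struct vpdma_desc_list list;	/* this slice's descriptor list */
	int list_num;			/* hardware list number used by this slice */
};

static irqreturn_t slice_list_complete_irq(int irq, void *data)
{
	struct slice_ctx *ctx = data;

	/*
	 * Repost the list from hard-IRQ context. The spinlock taken inside
	 * vpdma_submit_descs() keeps the VPDMA_LIST_ADDR/VPDMA_LIST_ATTR
	 * writes atomic with respect to posts from other slices, whether
	 * those happen in process or interrupt context.
	 */
	if (vpdma_submit_descs(ctx->vpdma, &ctx->list, ctx->list_num))
		pr_warn("slice: list %d busy, repost skipped\n", ctx->list_num);

	return IRQ_HANDLED;
}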

Signed-off-by: Nikhil Devshatwar <nikhil.nd@ti.com>
Signed-off-by: Benoit Parrot <bparrot@ti.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
parent dc12b124
drivers/media/platform/ti-vpe/vpdma.c
@@ -491,6 +491,7 @@ int vpdma_submit_descs(struct vpdma_data *vpdma,
 		struct vpdma_desc_list *list, int list_num)
 {
 	int list_size;
+	unsigned long flags;

 	if (vpdma_list_busy(vpdma, list_num))
 		return -EBUSY;
@@ -498,12 +499,14 @@ int vpdma_submit_descs(struct vpdma_data *vpdma,
 	/* 16-byte granularity */
 	list_size = (list->next - list->buf.addr) >> 4;

+	spin_lock_irqsave(&vpdma->lock, flags);
 	write_reg(vpdma, VPDMA_LIST_ADDR, (u32) list->buf.dma_addr);

 	write_reg(vpdma, VPDMA_LIST_ATTR,
 			(list_num << VPDMA_LIST_NUM_SHFT) |
 			(list->type << VPDMA_LIST_TYPE_SHFT) |
 			list_size);
+	spin_unlock_irqrestore(&vpdma->lock, flags);

 	return 0;
 }
@@ -1090,6 +1093,7 @@ struct vpdma_data *vpdma_create(struct platform_device *pdev,
 	vpdma->pdev = pdev;
 	vpdma->cb = cb;
+	spin_lock_init(&vpdma->lock);

 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "vpdma");
 	if (res == NULL) {
drivers/media/platform/ti-vpe/vpdma.h
@@ -35,6 +35,7 @@ struct vpdma_data {
 	struct platform_device	*pdev;

+	spinlock_t		lock;

 	/* callback to VPE driver when the firmware is loaded */
 	void (*cb)(struct platform_device *pdev);
 };