Dr Saddam Bekhet successfully passed his viva

Congratulations to Saddam, who successfully passed his PhD viva on 23rd May 2016.

The examiners commended Saddam’s work and contributions, and also emphasised how well written the thesis is.

A well-deserved achievement, Saddam. Well done.

And all the best for your future career.

 

PhD Studentship – “Object and Action Recognition”

Summary:  “Object and Action Recognition Assisted by Computational Linguistics”.

The aim of this project is to investigate how computer vision methods, such as object and
action recognition, may be assisted by computational linguistic models such as WordNet.
The main challenge of object and action recognition is scaling methods from
a dozen categories (e.g. PASCAL VOC) to thousands of concepts (e.g.
ImageNet ILSVRC). This project is expected to contribute to automated
visual content annotation and, more widely, to bridging the semantic gap between
computational approaches to vision and language.
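As an illustration of how a linguistic resource such as WordNet could assist recognition, the sketch below computes a simple path-based similarity over a hand-written toy taxonomy. The hierarchy, names, and scoring are hypothetical stand-ins for real WordNet queries, not part of the project itself:

```python
# Toy WordNet-style taxonomy: each concept maps to its hypernym (parent).
# A real system would query WordNet instead of this hand-written dict.
HYPERNYMS = {
    "dog": "canine", "wolf": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal", "animal": "entity",
}

def path_to_root(concept):
    """Return the hypernym chain from a concept up to the root."""
    path = [concept]
    while concept in HYPERNYMS:
        concept = HYPERNYMS[concept]
        path.append(concept)
    return path

def path_similarity(a, b):
    """1 / (1 + path length through the lowest common hypernym)."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, node in enumerate(pa):
        if node in pb:
            return 1.0 / (1 + i + pb.index(node))
    return 0.0

print(path_similarity("dog", "wolf"))     # close: share 'canine'
print(path_similarity("dog", "sparrow"))  # distant: only share 'animal'
```

Such a score could, for instance, let a recogniser merge or re-rank visually confusable labels that are semantically close.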

Deadline: 20th March 2015.

A PhD studentship is advertised at

Object and Action Recognition Assisted by Computational Linguistics

The research is a collaboration between Kingston University (Digital Imaging Research Centre) and the University of Lincoln (DCAPI group).

For details of the applications process: http://www.kingston.ac.uk/research/research-degrees/funding/phd-studentships-2015/faqs/


 

Showcase Event

Amr organised the Annual Showcase Event for the School of Computer Science, University of Lincoln (14th and 15th May). This year, the event also featured the Postgraduates by Research (PGRs) presenting their research work and giving interactive demos.

Amr has been organising the Showcase Event for a number of years

 

Amr arranging the “Welcome and Registration” table for visitors and company representatives.

Members of the DCAPI group presented and demonstrated their research work at the Annual Showcase Event for the School of Computer Science, University of Lincoln (14th and 15th May). Saddam also won the “Best Demo” prize for his video matching and retrieval interactive demo. (More details and photos are available on the group’s website at http://dcapi.blogs.lincoln.ac.uk/2014/05/17/pgrs-showcase-event/)

 

Saddam receiving his “Certificate of Achievement” for “Best Demo”

New paper accepted in ICPR 2014 – “Compact Signature-based Compressed Video Matching Using Dominant Colour Profiles (DCP)”

The paper “Compact Signature-based Compressed Video Matching Using Dominant Colour Profiles (DCP)” has been accepted at the ICPR 2014 conference (http://www.icpr2014.org/) and will be presented in August 2014 in Stockholm, Sweden.

Abstract — This paper presents a technique for efficient and generic matching of compressed video shots, through compact signatures extracted directly without decompression. The compact signature is based on the Dominant Colour Profile (DCP): a sequence of dominant colours extracted and arranged as a sequence of spikes, in analogy to the human retinal representation of a scene. The proposed signature represents a given video shot with ~490 integer values, facilitating real-time processing to retrieve a maximum set of matching videos. The technique works directly on MPEG compressed videos, without full decompression, as it utilises the DC-image as a base for extracting colour features. The DC-image has a highly reduced size, while retaining most visual aspects, and provides high performance compared to the full I-frame. Experiments on various standard datasets show the promising performance of the proposed technique, in both accuracy and computational complexity.
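As a rough illustration of the signature idea described in the abstract (not the paper's actual DCP construction), the sketch below quantises colours, takes the dominant colour of each small frame, and compares two signatures by aligned overlap. All names, parameters, and the matching rule are illustrative assumptions:

```python
from collections import Counter

def quantize(rgb, levels=4):
    """Map an (r, g, b) pixel to one of levels**3 coarse colour bins."""
    step = 256 // levels
    r, g, b = (c // step for c in rgb)
    return r * levels * levels + g * levels + b

def dominant_colour(frame_pixels, levels=4):
    """Most frequent quantized colour in one (small) frame."""
    return Counter(quantize(p, levels) for p in frame_pixels).most_common(1)[0][0]

def dcp_signature(frames):
    """Compact signature: one dominant-colour integer per frame."""
    return [dominant_colour(f) for f in frames]

def match_score(sig_a, sig_b):
    """Fraction of aligned positions with the same dominant colour."""
    n = min(len(sig_a), len(sig_b))
    if n == 0:
        return 0.0
    return sum(a == b for a, b in zip(sig_a, sig_b)) / n

red_frame = [(250, 10, 10)] * 16   # a tiny 4x4 'DC-image' of red pixels
blue_frame = [(10, 10, 250)] * 16
shot_a = [red_frame, red_frame, blue_frame]
shot_b = [red_frame, blue_frame, blue_frame]
print(match_score(dcp_signature(shot_a), dcp_signature(shot_b)))
```

The actual DCP uses a richer spike-like arrangement of dominant colours; the point here is only that a shot reduces to a short integer sequence that is cheap to compare.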

Congratulations and well done to Saddam.

Analysis and experimental results of using the DC-image, and comparisons with the full image (I-frame), can be found in “Video Matching Using DC-image and Local Features” (http://eprints.lincoln.ac.uk/12680/).

 

 

Dr Amjad Altadmri – Graduation Ceremony

Congratulations to Dr Amjad Altadmri for completing his PhD degree. Amjad received his PhD degree in the formal September Graduation Ceremony at Lincoln Cathedral.

Amjad Graduation Ceremony – September 2013

 

His PhD thesis is titled “Semantic Annotation of Domain-Independent Uncontrolled Videos, Incorporating Visual Similarity and Commonsense Knowledge Bases”. The work produced a framework for semantic video annotation. In addition, it produced VisualNet, a semantic network for visual-related applications.

The photo shows Dr Amjad Altadmri (left) with his Director of Studies/Supervisor Dr Amr Ahmed (right).

A list of publications from Amjad’s PhD is below:

 

Amjad has also participated, with Amr and other members of the DCAPI group, in various workshops, especially the V&L EPSRC Network workshops. They presented sessions and showed posters; see related blog posts:

 

Best Student Paper Award 2013 – WCE 2013

Congratulations to Saddam Bekhet (PhD Researcher), who achieved the “Best Student Paper Award 2013” for his conference paper “Video Matching Using DC-image and Local Features”, presented earlier at the “World Congress on Engineering 2013” in London.

Abstract: This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, and especially utilising small-size images. Two experiments were carried out to support the above. The first compares the DC-image and the I-frame, in terms of matching performance and the corresponding computational complexity. The second compares local features against global features in video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces matching precision at least similar to (if not better than) the full I-frame. Also, SIFT, as a local feature, outperforms most standard global features in precision. On the other hand, its computational complexity is relatively higher, but still within the real-time margin, and various optimisations could improve it further.
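The DC-image mentioned in the abstract can be understood as follows: the DC coefficient of each 8x8 DCT block in an MPEG stream is proportional to that block's mean intensity, so a DC-image is effectively a block-averaged thumbnail. The sketch below approximates one from raw greyscale pixels; a real system would read the DC coefficients from the bitstream without full decompression, so this is only a conceptual stand-in:

```python
def dc_image(frame, block=8):
    """Approximate the DC-image of a greyscale frame (2D list of values)
    by averaging each block x block region. The DCT DC coefficient is
    proportional to the block mean, so this mimics what a decoder can
    obtain from an MPEG stream without full decompression."""
    h, w = len(frame), len(frame[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [frame[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

# A 16x16 synthetic frame shrinks to a 2x2 DC-image.
frame = [[(x + y) % 256 for x in range(16)] for y in range(16)]
small = dc_image(frame)
print(len(small), len(small[0]))  # 2 2
```

The 64-fold area reduction is what makes matching on DC-images so much cheaper than on full I-frames.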

 

Conference paper presented at WCE’13 – 3rd July 2013 – London

The paper (titled “Video Matching Using DC-image and Local Features”) was presented by Saddam Bekhet (PhD Researcher) at the International Conference of Signal and Image Engineering (ICSIE’13), during the World Congress on Engineering 2013 in London, UK.

Abstract:

This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, and especially utilising small-size images. Two experiments were carried out to support the above. The first compares the DC-image and the I-frame, in terms of matching performance and the corresponding computational complexity. The second compares local features against global features in video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces matching precision at least similar to (if not better than) the full I-frame. Also, SIFT, as a local feature, outperforms most standard global features in precision. On the other hand, its computational complexity is relatively higher, but still within the real-time margin, and various optimisations could improve it further.

 

Well done, Saddam.

Conference paper Accepted to the “World Congress on Engineering”

New conference paper accepted for publication in the “World Congress on Engineering 2013”.

The paper is titled “Video Matching Using DC-image and Local Features”.

Abstract:

This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, and especially utilising small-size images. Two experiments were carried out to support the above. The first compares the DC-image and the I-frame, in terms of matching performance and the corresponding computational complexity. The second compares local features against global features in video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces matching precision at least similar to (if not better than) the full I-frame. Also, SIFT, as a local feature, outperforms most standard global features in precision. On the other hand, its computational complexity is relatively higher, but still within the real-time margin, and various optimisations could improve it further.

Automatic Semantic Video Annotation

Amjad Altadmri, Amr Ahmed*, Andrew Hunter

Poster – see link below to download PDF.

(Click “Semantic Video Annotation with Knowledge” to download the PDF: http://amrahmed.blogs.lincoln.ac.uk/files/2013/03/Semantic-Video-Annotation-with-Knowledge.pdf)

INTRODUCTION

The volume of video data is growing exponentially. This data needs to be annotated to facilitate search and retrieval, so that we can quickly find a video whenever needed.

Manual annotation, especially at such volume, is time-consuming and expensive. Hence, automated annotation systems are required.

 

AIM

Automated Semantic Annotation of wide-domain videos (i.e. no domain restrictions). This is an important step towards bridging the “Semantic Gap” in video understanding.

 

METHOD

1. Extract a “video signature” for each video.
2. Match signatures to find the most similar videos, with their annotations.
3. Analyse and process the obtained annotations, in consultation with common-sense knowledge bases.
4. Produce the suggested annotation.
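The four steps above can be sketched as a toy pipeline. The signature, the annotated corpus, and the common-sense check below are all hypothetical stand-ins for the real components (which used TRECVID/UCF data and large common-sense knowledge bases):

```python
# Toy sketch of the four-step annotation pipeline. Every data structure
# here is a hypothetical stand-in, not the actual system's.

CORPUS = {  # pre-annotated videos: signature -> annotations
    (1, 1, 2): ["dog", "park"],
    (1, 2, 2): ["dog", "beach"],
    (9, 9, 9): ["car", "road"],
}

# Stand-in for a common-sense knowledge base of related concept pairs.
COMMONSENSE_RELATED = {("dog", "park"), ("dog", "beach"), ("car", "road")}

def signature(video_frames):
    """Step 1: stand-in 'video signature' (most frequent value per frame)."""
    return tuple(max(set(f), key=f.count) for f in video_frames)

def most_similar(sig):
    """Step 2: nearest corpus signature by element-wise distance."""
    return min(CORPUS, key=lambda s: sum(abs(a - b) for a, b in zip(s, sig)))

def annotate(video_frames):
    """Steps 3-4: keep annotations only if the knowledge base finds the
    label combination coherent; otherwise suggest nothing."""
    labels = CORPUS[most_similar(signature(video_frames))]
    pairs = {(a, b) for a in labels for b in labels if a != b}
    return labels if pairs & COMMONSENSE_RELATED else []

print(annotate([[1, 1, 3], [1, 1, 2], [2, 2, 5]]))
```

The common-sense consultation is what lets the framework reject annotation sets that are visually plausible but semantically incoherent.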

EVALUATION

• Two standard and challenging datasets were used: TRECVID BBC Rush and UCF.
• Black-box and white-box testing were carried out.
• Measures include: precision, confusion matrix.
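For reference, the two listed measures can be computed along these lines (the labels and predictions below are toy examples, not the actual evaluation data):

```python
from collections import Counter

def confusion_matrix(truth, predicted):
    """counts[(true_label, predicted_label)] over all test items."""
    return Counter(zip(truth, predicted))

def precision(truth, predicted, label):
    """Of the items predicted as `label`, the fraction that are correct."""
    true_for_preds = [t for t, p in zip(truth, predicted) if p == label]
    if not true_for_preds:
        return 0.0
    return true_for_preds.count(label) / len(true_for_preds)

truth     = ["dog", "dog", "cat", "car", "cat"]
predicted = ["dog", "cat", "cat", "car", "dog"]
cm = confusion_matrix(truth, predicted)
print(precision(truth, predicted, "dog"))  # 1 of 2 'dog' predictions correct
```

The confusion matrix shows which classes get mistaken for which; per-class precision summarises each row of predictions into a single score.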

CONCLUSION

• Developed an automatic semantic video annotation framework.
• Not restricted to videos of a specific domain.
• Utilising common-sense knowledge enhances scene understanding and improves semantic annotation.
Publications
  1. A framework for automatic semantic video annotation 
    Altadmri, Amjad and Ahmed, Amr (2013) A framework for automatic semantic video annotation. Multimedia Tools and Applications, 64 (2). ISSN 1380-7501.
  2. Semantic levels of domain-independent commonsense knowledgebase for visual indexing and retrieval applications 
    Altadmri, Amjad and Ahmed, Amr and Mohtasseb Billah, Haytham (2012) Semantic levels of domain-independent commonsense knowledgebase for visual indexing and retrieval applications. Neural Information Processing. Lecture Notes in Computer Science, 7663 . pp. 640-647. ISSN 0302-9743
  3. VisualNet: commonsense knowledgebase for video and image indexing and retrieval application 
    Alabdullah Altadmri, Amjad and Ahmed, Amr (2009) VisualNet: commonsense knowledgebase for video and image indexing and retrieval application. In: IEEE International Conference on Intelligent Computing and Intelligent Systems, 21-22 November 2009, Shanghai, China.
  4. Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases 
    Altadmri, Amjad and Ahmed, Amr (2009) Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases. In: The IEEE International Conference on Signal and Image Processing Applications (ICSIPA 2009), 18-19th November 2009, Malaysia.
  5. Video databases annotation enhancing using commonsense knowledgebases for indexing and retrieval 
    Altadmri, Amjad and Ahmed, Amr (2009) Video databases annotation enhancing using commonsense knowledgebases for indexing and retrieval. In: The 13th IASTED International Conference on Artificial Intelligence and Soft Computing, 7-9 September 2009, Palma de Mallorca, Spain.