Extraction of Video Semantic Content Using Fuzzy Rule Based Model

DOI: 10.17577/IJERTV2IS120944


Mrs. G. Shoba, M.E.,

Senior Assistant Professor (CSE Dept.)

Kalaitchelvi S.

M.Tech Student (CSE Dept.)

Christ College of Engineering and Technology, Affiliated to Pondicherry University

Abstract

Searching digital videos in large databases is difficult, and there is a huge demand for content-based video retrieval systems. Advances in video technology and the development of new video functions have led to enormous growth in the volume of video data. To manage these data efficiently, a suitable video data model is needed, since the relational model and other traditional models cannot satisfy the constraints of video data. Here, we propose a semantic content extraction system that allows the user to query and retrieve objects, events, and concepts by establishing a fuzzy rule based model that uses spatial and temporal relations in event and concept definitions. In addition, ontology and rule definitions are used to lower the cost of computing spatial relations and are able to identify some difficult situations successfully.

Index Terms – ontology, rule based model, semantic content extraction, video content, fuzzy logic.

  1. Introduction

Mining is the process of extracting knowledge from large volumes of raw data. Data mining automates the process of discovering relationships in and modelling raw data, and the results can be utilized in a preset decision support system. The amount of video content being uploaded to the internet is steadily increasing. Search engines that catalogue multimedia content, such as YouTube, index videos mainly based on manually assigned text tags. With such a massive number of videos available on the web, it would be a time-consuming process to tag all of them by hand. This fact raises the importance of content-based search, which relies on automatic key frame extraction techniques. In this paper we introduce a semantic content based framework for video retrieval using a genetic algorithm. The extraction technique mainly focuses on semantic modelling of video database applications, taking into account the uncertainty issues that occur in video data.

An ONTOLOGY is used to model a domain in a human-understandable, machine-readable format consisting of entities, attributes, and relationships, and it is used as a standard data representation for the Semantic Web.

Consider an example: a file or video can be very relevant, somewhat relevant, or irrelevant to a given research area, and keywords corresponding to that research area are extracted from it. However, it is inappropriate to treat all keywords equally, as some keywords may be more significant than others. To deal with this type of problem, one possible solution is to integrate fuzzy logic into the ontology to handle ambiguous data. Usually, a fuzzy ontology is generated and used in content retrieval and search engines, in which membership values, minimum bounding rectangle (MBR) data, frame numbers, and types are used to assess the similarities between concepts in a concept hierarchy.

    1. Fuzzy FCA

Formal Concept Analysis (FCA) is a formal technique for data analysis and knowledge representation. It defines formal contexts to represent relationships between objects and attributes in a domain. From the formal contexts, FCA can then generate formal concepts and derive the corresponding concept lattice, so that information can be browsed or retrieved effectively. We apply fuzzy logic to represent vague information and build the concept lattice.

In Fuzzy Formal Concept Analysis (FFCA), vague information is represented directly by a real-valued membership degree in the range [0, 1]. As such, linguistic variables are no longer needed. Compared to the fuzzy concept lattice generated from an L-fuzzy context, the fuzzy concept lattice generated using FFCA is simpler in terms of the number of formal concepts. FFCA also provides a formal mechanism for calculating concept similarities.
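As a minimal illustration of this mechanism, the sketch below (our own Python example; the concept names and membership values are hypothetical) represents a fuzzy concept as a mapping from objects to membership degrees and computes a fuzzy Jaccard-style similarity between two concepts:

    # Minimal sketch: a fuzzy formal concept as {object: membership in [0, 1]}
    # and a fuzzy Jaccard-style similarity between two concepts.

    def fuzzy_similarity(a, b):
        """Similarity = |A intersect B| / |A union B| with min/max fuzzy set operations."""
        objects = set(a) | set(b)
        inter = sum(min(a.get(o, 0.0), b.get(o, 0.0)) for o in objects)
        union = sum(max(a.get(o, 0.0), b.get(o, 0.0)) for o in objects)
        return inter / union if union else 0.0

    # Example: two concepts sharing one object with different membership degrees.
    player = {"frame12": 0.9, "frame13": 0.7}
    goalkeeper = {"frame12": 0.6, "frame14": 0.8}
    print(fuzzy_similarity(player, goalkeeper))  # 0.6 / 2.4 = 0.25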

2. Fuzzy Conceptual Clustering

As in a traditional concept lattice, the fuzzy concept lattice generated using FFCA is sometimes quite complicated due to the large number of fuzzy formal concepts generated. Since the formal concepts are generated mathematically, objects that have only small differences in their attribute values are classified into distinct formal concepts, even though such objects should belong to the same concept when interpreted by humans. Thus, we cluster formal concepts into conceptual clusters using fuzzy conceptual clustering. Compared to traditional clusters, the conceptual clusters generated have the following properties:

Each conceptual cluster is considered a human-interpretable concept in the domain of the fuzzy concept lattice.

Each conceptual cluster is a sublattice extracted from the fuzzy concept lattice.

A formal concept must belong to at least one conceptual cluster; for example, a scientific document can belong to more than one research area.

Conceptual clusters are generated based on the premise that if a formal concept A belongs to a conceptual cluster R, then its subconcept B also belongs to R if B is similar to A. A similarity confidence threshold Ts is used to determine whether two concepts are similar, as sketched below.
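The following sketch (our own hypothetical illustration, reusing the fuzzy_similarity function from the previous sketch) shows how a cluster could be grown from a seed concept by absorbing subconcepts whose similarity to their parent reaches Ts:

    # Sketch: grow a conceptual cluster from a seed concept by absorbing
    # subconcepts whose similarity to their parent is at least Ts.
    # Reuses fuzzy_similarity from the previous sketch; `children` maps a
    # concept name to its direct subconcepts, `extent` to its fuzzy object set.

    def grow_cluster(seed, children, extent, ts=0.5):
        cluster, frontier = {seed}, [seed]
        while frontier:
            parent = frontier.pop()
            for child in children.get(parent, []):
                if child not in cluster and \
                        fuzzy_similarity(extent[parent], extent[child]) >= ts:
                    cluster.add(child)      # similar subconcept joins the cluster
                    frontier.append(child)  # and its own subconcepts are examined
        return cluster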

In this paper we propose a fuzzy rule based model to improve the relation between objects and events in the video extraction process. The remainder of this paper is organized as follows: Section 2 discusses previous work on video modelling, Section 3 discusses the semantic content framework, and Section 4 concludes the paper with some suggestions for further improvement.

2. Related Studies

    1. Video Segmentation

Video content models are broadly classified into two types: low-level features and high-level semantic content. Low-level features include audio and visual features such as texture, shape, and motion, while high-level semantic content includes the objects and events in the video. The hierarchical structure of video is represented in Figure 1.

Figure 1 Hierarchical Representation of Video (video - sequence - scene - shot - frame)
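To make the hierarchy concrete, the following sketch (our own illustrative Python data model, not taken from any of the systems surveyed below) represents the video-sequence-scene-shot-frame structure of Figure 1:

    # Sketch: the video -> sequence -> scene -> shot -> frame hierarchy of Figure 1.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Shot:
        frames: List[int]                 # contiguous frames of one continuous action

    @dataclass
    class Scene:
        shots: List[Shot]                 # shots related in time and space

    @dataclass
    class Sequence:
        scenes: List[Scene]               # semantically close scenes forming one story

    @dataclass
    class Video:
        sequences: List[Sequence] = field(default_factory=list)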

Some of the existing video content models, such as OVID (Object-Oriented Video Information Database), the Algebraic Video System, VideoSTAR (Video Storage and Retrieval), AVIS (Advanced Video Information System), and BilVideo, are used for extraction.

      AVIS (Advanced Video Information System) [2] is an object-based video data model that can be used for any kind of video data. In the model, the main focus is on objects, events, and activities, called entities, appearing in the video.

BilVideo [3] is a video database system whose main contribution is perhaps the advanced, rule-based spatio-temporal modelling and querying functionality it provides, although it also includes more conventional temporal semantic annotation. The spatio-temporal annotation is based on specifying minimum bounding rectangles (MBRs) for salient objects (objects the user is interested in) in each video frame. Based on these MBRs, the video is partitioned into segments. Within a segment there is no significant change in the spatial relations between the MBRs, and each segment is represented by a key frame. Also based on the MBRs, spatial relations for each segment are extracted and stored as Prolog facts.
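As a simplified sketch of this kind of MBR-based extraction (our own Python example, not BilVideo's actual implementation; the coordinate convention is an assumption), the code below derives directional and overlap relations from two bounding rectangles:

    # Sketch: simple spatial relations between two minimum bounding rectangles,
    # each given as (x_min, y_min, x_max, y_max); y is assumed to grow upward.

    def spatial_relations(a, b):
        relations = []
        if a[2] < b[0]:
            relations.append("left_of")      # a ends before b begins on the x axis
        elif a[0] > b[2]:
            relations.append("right_of")
        if a[3] < b[1]:
            relations.append("below")        # a ends before b begins on the y axis
        elif a[1] > b[3]:
            relations.append("above")
        if a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]:
            relations.append("overlaps")
        return relations

    # Example: a ball strictly to the left of a player in the same frame.
    print(spatial_relations((0, 0, 10, 10), (20, 0, 30, 10)))  # ['left_of']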

OVID (Object-Oriented Video Information Database), proposed in [4], introduces a video-object model and a prototype video-object database system. A video frame sequence is considered a video object, which has its own attributes. The model is schemaless, so an object-oriented class hierarchy is not used as a database schema. Data is shared among video objects by using interval-inclusion inheritance. A set of composition operations, such as interval projection, merge, and overlap, is used for video objects. A query language, VideoSQL, is also introduced to query video objects.
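As a rough illustration of interval-based composition (our own sketch; OVID's actual operators differ in detail), frame intervals can be intersected and merged as follows:

    # Sketch: composition operations on inclusive frame intervals (start, end).

    def overlap(a, b):
        """Intersection of two frame intervals, or None if they are disjoint."""
        start, end = max(a[0], b[0]), min(a[1], b[1])
        return (start, end) if start <= end else None

    def merge(a, b):
        """Union of two overlapping or adjacent frame intervals."""
        if overlap(a, b) or a[1] + 1 == b[0] or b[1] + 1 == a[0]:
            return (min(a[0], b[0]), max(a[1], b[1]))
        return None  # disjoint intervals cannot be merged into one

    print(overlap((10, 50), (40, 90)))  # (40, 50)
    print(merge((10, 50), (40, 90)))    # (10, 90)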

VideoSTAR, proposed in [5], introduces a generic video model supporting not only the semantics but also the structure of video documents. It uses the sequence-scene-shot hierarchy, a well-known method for representing the hierarchical structure of video data, and defines a shot as a contiguous sequence of frames representing a continuous action in time and space. Scenes are constructed from shots that are related in time and space, and semantically close scenes are combined into a sequence, which describes a continuing story. The model in [5] is represented by an enhanced ER model. Annotations (Person, Location, and Event) are associated with video segments to support indexing, and the annotation-related classes may be extended for any application domain.

2. Semantic content

Semantic content is obtained by textual annotation or by a complex inference procedure based on the visual content of the video. The search analyzes the actual content of the video; here, content refers to color, shape, and texture. Before describing the framework, we would like to explain why we chose a semantic web based framework rather than a relational database framework. The main reason is the dynamic nature of the underlying data to be modeled: it is hard to predict the concepts that a system would learn from the features extracted from a video. As the semantic concepts are learned on the fly, as and when the features are extracted, the semantic web provides a more flexible data storage technique than a relational database. The second reason is to give the framework the ability to extend when new feature extraction techniques are introduced, without modifying the underlying data model and structures. The third reason is scalability. As the amount of video content to be indexed is huge, indexing techniques need to be scalable. Compared to a relational database, the semantic web is easier to scale, as it was built for handling very large amounts of data. By using the semantic web, we could easily scale the system by using a distributed file system and adding computers as and when needed. In short, content based search is an ideal candidate for a semantic web based implementation.

Figure 2 Automatic Semantic Content Extraction Process (video - object extraction - event and concept extraction)

  3. Automatic Semantic Content Framework

    1. Object Extraction

Object extraction is also known as spatial extraction. Objects are used as the input for the extraction process. However, object extraction falls within the domain of image processing and analysis techniques. To carry out object extraction, a supervised learning approach based on a genetic algorithm is used for object extraction and classification. The resulting object instances are stored in the repository; each object instance consists of a type, a membership value, a minimum bounding rectangle (MBR), a frame number, and a certainty value.

    2. Event Extraction

Event extraction is also known as temporal extraction. Event instances are extracted after a sequence of automatic extraction processes. During extraction, the semantic content is extracted with a certainty degree between 0 and 1. An extracted event instance is represented by a type, a frame set representing the event's interval, a membership value, and the positions of the objects involved in the event.
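To make these representations concrete, the sketch below (our own illustrative Python data structures; the field names are assumptions, not the paper's exact schema) models the object and event instances described above:

    # Sketch: object and event instances as stored in the extraction repository.
    from dataclasses import dataclass
    from typing import List, Tuple

    MBR = Tuple[int, int, int, int]       # (x_min, y_min, x_max, y_max)

    @dataclass
    class ObjectInstance:
        type: str                         # e.g. "player" (hypothetical label)
        membership: float                 # fuzzy membership value in [0, 1]
        mbr: MBR                          # minimum bounding rectangle in the frame
        frame: int                        # frame number where the object appears
        certainty: float                  # certainty value of the extraction

    @dataclass
    class EventInstance:
        type: str                         # e.g. "goal" (hypothetical label)
        frames: List[int]                 # frame set covering the event's interval
        membership: float                 # fuzzy membership value in [0, 1]
        objects: List[ObjectInstance]     # objects involved, with their positions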

    3. Concept Extraction

Concept extraction derives relations between object and event instances. In addition, similarity matching between individuals is utilized in order to extract more concepts from the already extracted components, and the final step in this extraction process is executing concept rule definitions.

1. Rule Based Modelling

Rules are used to improve the capabilities of the video extraction process.

2. Rule Extraction from Mining

A rule based classifier can be built by extracting IF-THEN rules from a decision tree. To extract rules from a decision tree, the following points are observed (see the sketch after this list):

One rule is created for each path from the root to a leaf node.

Each splitting criterion along the path is logically ANDed to form the rule antecedent.

The leaf node holds the class prediction, forming the rule consequent.
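A minimal sketch of this procedure (our own toy tree encoding in Python, independent of any particular decision tree library; the tests and labels are hypothetical):

    # Sketch: extract IF-THEN rules from a decision tree, one rule per
    # root-to-leaf path, ANDing the splitting criteria along the way.

    def extract_rules(node, conditions=()):
        if "label" in node:  # leaf: conditions form the antecedent, label the consequent
            yield "IF " + " AND ".join(conditions) + " THEN class = " + node["label"]
            return
        test = node["test"]                       # splitting criterion at this node
        yield from extract_rules(node["yes"], conditions + (test,))
        yield from extract_rules(node["no"], conditions + ("NOT " + test,))

    tree = {"test": "object overlaps goal MBR",
            "yes": {"label": "goal event"},
            "no": {"test": "speed > 5",
                   "yes": {"label": "attack"},
                   "no": {"label": "other"}}}
    for rule in extract_rules(tree):
        print(rule)
    # IF object overlaps goal MBR THEN class = goal event
    # IF NOT object overlaps goal MBR AND speed > 5 THEN class = attack
    # IF NOT object overlaps goal MBR AND NOT speed > 5 THEN class = other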

4. Video Semantic Content Model (VISCOM)

VISCOM is an ontology based model in which classes and relations are defined. It provides an alternative, rule based method for domain independent extraction; the rule process is easy to construct and can make use of larger volumes of video data. The model is developed for the uncertainty issues which occur in video database applications, and it provides fuzzy classes and properties.

In this model we calculate the spatial movement, spatial change, spatial change period, and spatial relation components that represent an object for spatial extraction; likewise, for temporal extraction we calculate the temporal relation and the temporal spatial change components that represent an event. The relation between the spatial and temporal extractions represents the concept extraction. The output of the extraction process, a set of semantic content instances, is stored in the repository.

Figure 3 Representation of a Rule Based Model (an inference engine with a rule base of rules such as "if x then y" and "if y then z"; similarity matching, rule selection, and rule execution drive semantic content video extraction)

A rule-based classifier makes use of a set of IF-THEN rules for classification. A rule has the following form:

IF condition THEN conclusion

Each rule consists of a head and a body. The head contains the syntax of the rule, while the body specifies the classes and properties involved; the fuzzy conditions are handled in the body part. Rule definitions are used for two different purposes. The first purpose is the exact extraction of content in terms of the classes and properties involved. The second purpose is to define some complex situations through similarity measures: if the similarity matches, the fuzzy rule is applied; otherwise the condition does not match the concept.
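The following sketch (our own simplified Python illustration of the matching, selection, and execution cycle shown in Figure 3; the rules and membership values are hypothetical) shows how chained fuzzy rules can propagate membership values:

    # Sketch: a tiny fuzzy inference cycle (matching -> selection -> execution).
    # A rule fires when its condition's membership reaches the threshold; the
    # conclusion inherits the membership of the matched condition.

    def infer(facts, rules, threshold=0.5):
        """facts: {concept: membership}; rules: list of (condition, conclusion)."""
        derived = dict(facts)
        changed = True
        while changed:                   # keep executing until no rule adds anything
            changed = False
            for condition, conclusion in rules:
                mu = derived.get(condition, 0.0)                           # matching
                if mu >= threshold and mu > derived.get(conclusion, 0.0):  # selection
                    derived[conclusion] = mu                               # execution
                    changed = True
        return derived

    rules = [("x", "y"), ("y", "z")]  # the "if x then y", "if y then z" rules of Figure 3
    print(infer({"x": 0.8}, rules))   # {'x': 0.8, 'y': 0.8, 'z': 0.8}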

5. Conclusion and Future Work

The primary aim of this work is to develop an automatic semantic content framework for various areas such as sports video and news video applications. An automatic genetic algorithm based object extraction method is integrated into the proposed system to capture semantic content.

As further study, one can improve the model and the extraction capability of the framework, additionally adding rule concepts for temporal extraction to reduce the rebounding method.

6. References

1. Y. Yildirim and A. Yazici, "Automatic Semantic Content Extraction in Video Using a Fuzzy Ontology and Rule Based Model," IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 1, Jan. 2013.

2. M. Köprülü, N.K. Cicekli, and A. Yazici, "Spatio-Temporal Querying in Video Databases," Information Sciences, vol. 160, nos. 1-4, pp. 131-152, 2004.

3. T. Sevilmis, M. Bastan, U. Güdükbay, and Ö. Ulusoy, "Automatic Detection of Salient Objects and Spatial Relations in Videos for a Video Database System," Image and Vision Computing, vol. 26, no. 10, pp. 1384-1396, 2008.

4. E. Oomoto and K. Tanaka, "OVID: Design and Implementation of a Video-Object Database System," IEEE Transactions on Knowledge and Data Engineering, vol. 5, no. 4, pp. 629-643, Aug. 1993.

5. R. Hjelsvold and R. Midtstraum, "Modelling and Querying Video Data," Proc. 20th International Conference on Very Large Data Bases, pp. 686-694, 1994.

6. Y. Yildirim, T. Yilmaz, and A. Yazici, "Ontology-Supported Object and Event Extraction with a Genetic Algorithms Approach for Object Classification," Proc. Sixth ACM Int'l Conf. Image and Video Retrieval (CIVR '07), pp. 202-209, 2007.

7. D. Song, H.T. Liu, M. Cho, H. Kim, and P. Kim, "Domain Knowledge Ontology Building for Semantic Video Event Description," Proc. Int'l Conf. Image and Video Retrieval (CIVR), pp. 267-275, 2005.

8. R. Nevatia and P. Natarajan, "EDF: A Framework for Semantic Annotation of Video," Proc. 10th IEEE Int'l Conf. Computer Vision Workshops (ICCVW '05), p. 1876, 2005.

9. T. Yilmaz, "Object Extraction from Images/Videos Using a Genetic Algorithm Based Approach," master's thesis, Computer Eng. Dept., METU, Turkey, 2008.

10. I. Horrocks, P.F. Patel-Schneider, H. Boley, S. Tabet, B. Grosof, and M. Dean, "SWRL: A Semantic Web Rule Language," technical report, W3C, http://www.w3.org/Submission/SWRL/, 2004.

11. S.C. Sebastine, B. Thuraisingham, and B. Prabhakaran, "Semantic Web for Content Based Video Retrieval."
