
Object detection with OpenCV 3

The ObjectDetection module of OpenCV is based on the concept of cascade classifiers. There are two types of classifiers currently in use: Haar cascades and LBP (Local Binary Patterns). Both of them may be used in the Android environment.
The cascade training technique is described here, here and here. (SVM training is a bit different.)
Thanks to the factory method FastFeatureDetector::create, configuring FastFeatureDetector in native C++ code is straightforward:

Ptr<FastFeatureDetector> fastFeatureDetector = FastFeatureDetector::create(10, true, FastFeatureDetector::TYPE_9_16);
On the other hand, the Android Java wrapper for OpenCV has no explicit notion of any specific detector: in order to create a particular detector, one uses the very generic factory method FeatureDetector.create(), like

FeatureDetector detector = FeatureDetector.create(FeatureDetector.FAST);

We have a generic detector, one of a large family in which every member has several methods in common (detect() etc.), but we want a specific detector, our FAST, configured with its own parameters. And surely, in other cases, we will need other detectors with other configurations. For this, the engineers of OpenCV came up with a strange solution: YAML files.



Generally speaking, YAML is much like the old INI files. I was surprised to find such files in modern software (as far as OpenCV may be considered modern), but YAML is very intuitive. The properties of the configured class become lines of the YAML file. Say our FastFeatureDetector needs a threshold of 10. The corresponding property is simply called 'threshold', and so is the line in YAML:

%YAML:1.0
threshold: 10
nonmaxSuppression: 1
type : 1

The other properties follow the same pattern, and the whole configuration procedure is as follows:
1. Somewhere long before the actual creation of FastFeatureDetector:

String fastConfig = "%YAML:1.0\nthreshold: 10\nnonmaxSuppression: 1\ntype : 1\n";
writeToFile(mDetectorConfigFile, fastConfig);

(Pay attention to the numerical value of type as defined in features2d.hpp:

enum
{
    TYPE_5_8 = 0, TYPE_7_12 = 1, TYPE_9_16 = 2,
    THRESHOLD = 10000, NONMAX_SUPPRESSION = 10001, FAST_N = 10002
};)
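The writeToFile() helper used in step 1 is not shown in the post; a minimal sketch in plain Java (buildFastYaml() is an illustrative helper name, not an OpenCV API) that produces exactly the string from step 1 and persists it could be:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class DetectorConfig {
    // Build the YAML configuration for FastFeatureDetector.
    // type: 0 = TYPE_5_8, 1 = TYPE_7_12, 2 = TYPE_9_16 (see features2d.hpp)
    static String buildFastYaml(int threshold, boolean nonmaxSuppression, int type) {
        return "%YAML:1.0\n"
                + "threshold: " + threshold + "\n"
                + "nonmaxSuppression: " + (nonmaxSuppression ? 1 : 0) + "\n"
                + "type : " + type + "\n";
    }

    // Persist the configuration so FeatureDetector.read() can pick it up later.
    static void writeToFile(File file, String content) throws IOException {
        try (FileWriter writer = new FileWriter(file)) {
            writer.write(content);
        }
    }
}
```

Keeping the string construction in one place makes it easy to regenerate the file whenever the detector parameters change.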
2. Within onCameraFrame():
FeatureDetector mDetector = FeatureDetector.create(FeatureDetector.FAST);
mDetector.read(mDetectorConfigFile.getPath());
mDetector.detect(mRgba, keyPoints);

Aside from the reading cost, we now have a Java equivalent to the C++:

C++:

flip(mGr, mGr, 1);

vector<KeyPoint> keypoints;

Ptr<FastFeatureDetector> fastFeatureDetector = FastFeatureDetector::create(10, true, FastFeatureDetector::TYPE_7_12);
if( fastFeatureDetector.empty() )
    return;
fastFeatureDetector->detect(mGr, keypoints);

drawKeypoints(mGr, keypoints, mRgb);

Java:

Core.flip(mGray, mGray, 1);
MatOfKeyPoint keyPoints = new MatOfKeyPoint();

FeatureDetector detector = FeatureDetector.create(FeatureDetector.FAST);
if( detector.empty() )
    return mRgba;
detector.read(mDetectorConfigFile.getPath());
detector.detect(mRgba, keyPoints);

Features2d.drawKeypoints(mGray, keyPoints, mRgba);

Now let's compare the performance of both.
The first device measured is a Samsung Note II, equipped with an ARMv7-based quad-core 1.6 GHz Exynos (Cortex-A9) and a Mali-400 MP GPU.
The Java code above executes on this device in about 40 ms.
The corresponding C++ code executes there in about 20 ms.
A twofold performance gain in favor of C++.
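Such numbers can be taken with a plain System.nanoTime() harness around the detect() call; a sketch with the workload abstracted out (the OpenCV calls themselves are as shown above):

```java
public class DetectTimer {
    // Measure the average execution time of a workload, in milliseconds.
    static double measureMs(Runnable workload, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            workload.run();
        }
        return (System.nanoTime() - start) / 1e6 / iterations;
    }
}
```

In the frame callback this would wrap mDetector.detect(mRgba, keyPoints); averaging over many frames smooths out GC pauses on the Java side.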




Some Gradle clarifications

Shortcuts:

  • Any task defines the public methods doLast() and doFirst(). The << operator is simply an alias for doLast.

task transform {
              doLast {
                println "Executing $it task"
              }
          }

          is equal to

          task transform << {
           println "Executing $it task"
          }

          N.B. wherever it is used, $it refers to the current instance of the Gradle entity

  • gradle.taskGraph.whenReady is executed after the configuration stage is completed, on a per-project basis. For example:

gradle.taskGraph.whenReady { taskGraph ->
    if( taskGraph.hasTask(loadFile) ) {
        version = '1.0'
    } else {
        version = '1.0-SNAPSHOT'
    }
    println project
    println version
}

        N.B. project is a pre-defined variable. Usually it stands for the current directory of the script.

  • task myCopy(type: Copy) means that the task myCopy 'inherits' from the Gradle-supplied task Copy

Task configuration

The execution stage of Gradle always follows the configuration stage. While the configuration stage is mostly performed silently, the execution stage is invoked from the command line by gradle [-q] <task>.
At the configuration stage it is possible to configure various properties of the tasks. For example, consider the following tasks:
   task transform(type: Copy, description: 'Transformation') << {
        println "Executing $it task"
        ext.srcFile = 'default.html'
   }

  task buildMe << {
    println "accessing " + tasks.getByName('transform')
    transform.from 'resources'
    transform.into 'target'
  }

  N.B. tasks.getByName('transform') may be simplified to just transform: println "accessing " + transform
  N.B.2 transform.from 'resources' assumes the base task - Copy - has an API (a method, not just a property!) named from. No = is used by Groovy to assign the value in this case.
  N.B.3 the two statements
transform.from 'resources'
transform.into 'target'

  may be expressed as a single compound one:

  transform {
      from 'resources'
      into 'target'
}

Access to the system


def opencvhome = System.getenv('OPENCV_HOME')
println opencvhome

Gradle Wrapper

The Gradle Wrapper serves for Gradle integration in a VCS. It is especially useful with Continuous Integration servers like Jenkins or Travis. The Wrapper is actually a jar (gradle-wrapper.jar) that can be created by invoking gradle wrapper with a version specified. Being stored in the VCS, the wrapper pins the version in use and, furthermore, if this version is not present on the system, it is able to download it. Changing the version in use is easy because the Gradle Wrapper reads its configuration from the associated gradle-wrapper.properties file. For example, the following configuration may be used to work with Gradle 2.5:

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-2.5-all.zip

Plugins
Plugins are just sets of tasks. For AS, the most useful plugin is 'android' or, recently, with the advent of AS 1.3, the experimental 'com.android.model.application'. Each plugin defines its own API; for example, the experimental plugin defines the following structure:

apply plugin: 'com.android.model.application'
-model
    - compileOptions.with
        - sourceCompatibility
        - targetCompatibility
    - android
        - defaultConfig.with
    - android.ndk
    - android.buildTypes
    - android.productFlavors
    - android.sources
        - main
          - jni
          - java
            - source
          - jniLibs
- dependencies
Let's see how this configuration applies to AS. With "normal" settings pointing the java source to 'src', the project tree looks as expected. When the java source is intentionally changed to something different - srcXXX - AS tries to find the source at this 'wrong' location and restructures the project accordingly. (The screenshots illustrating both cases are not reproduced here.)

android.sources deserves special attention. As with the old, plain 'android' plugin, Google introduced a new folder, 'jniLibs', to the source sets. This means that you can put prebuilt .so files into this folder, and the plugin will take care of packaging those native libraries inside the APK.

Sample
import org.apache.tools.ant.taskdefs.condition.Os

apply plugin: 'com.android.model.application'

model {

    compileOptions.with {
        sourceCompatibility = JavaVersion.VERSION_1_7
        targetCompatibility = JavaVersion.VERSION_1_7
    }

    android {
        compileSdkVersion = 22
        buildToolsVersion = "22.0.1"

        defaultConfig.with {
            applicationId = "com.example.oleg.myapplication"
            minSdkVersion.apiLevel = 18
            targetSdkVersion.apiLevel = 22
            versionCode = 1
            versionName = "1.0"
            multiDexEnabled = true
            testInstrumentationRunner = "android.support.test.runner.AndroidJUnitRunner"
        }

    }

    android.ndk {
        moduleName = "fastcvUtils"

        //abiFilters += ["armeabi", "armeabi-v7a", "x86"]
        abiFilters += "armeabi-v7a"

        // Link with static OpenCV libraries: the order is important!
        // The following order taken from OpenCV.mk
        def opencv_modules = ["shape", "stitching", "objdetect", "superres", "videostab", "calib3d", "features2d", "highgui", "videoio", "imgcodecs", "video", "photo", "ml", "imgproc", "flann", "core", "hal"]

        ldLibs += ["log","stdc++","dl", "z"]

        abiFilters.each { targetAbi ->
            println targetAbi

            opencv_modules.each{ module ->
                ldLibs += file(projectDir).absolutePath + "/src/main/jniLibs/" + targetAbi + "/libopencv_" + module + ".a"
            }
        }

        cppFlags   += "-Werror"
        cppFlags   += "--debug"
        cppFlags   += "-frtti"
        cppFlags   += "-fexceptions"
        cppFlags += "-I/Users/Oleg/Downloads/OpenCV-android-sdk/sdk/native/jni/include"
        //cppFlags    += "-std=c++11" // for lambda-expressions

        ldLibs += file(projectDir).absolutePath+"/src/main/jniLibs/armeabi-v7a/libtbb.a"

        stl = "gnustl_static"

    }

    android.buildTypes {
        release {
            isMinifyEnabled = false
            proguardFiles += file('proguard-android.pro')
            isJniDebuggable = true
            isDebuggable = true
        }

        debug {
            isJniDebuggable = true
            isDebuggable = true
        }
    }

    android.productFlavors {

        create("arm") {
            ndk.with {

                abiFilters += "armeabi"
                println "*** building for arm"
            }

        }

        create("armv7") {
            ndk.with {

                abiFilters += "armeabi-v7a"
                println "*** building for armv7"
            }

        }

        create("x86") {
            ndk.with {
                abiFilters += "x86"
                println "*** building for x86"
            }
        }
    }

    components.android {
        binaries.afterEach { binary ->
            println binary.name
            binary.mergedNdkConfig.cppFlags.add("-std=c++11")
        }
    }

}

dependencies {
    compile fileTree(include: ['*.jar'], dir: 'libs')
    compile 'com.android.support:appcompat-v7:22.2.1'
    compile project(':opencv')
}



Setup native OpenCV for Android

We will cover here only the setup of OpenCV for native programming in C++, but first one remark: the OpenCV engineers have prepared an almost complete Java port of the library, which includes not only the interfaces required for interaction with Android, but also a complete wrapper of all the library's classes and methods. Thus, you can program OpenCV while staying in Java, without a single line of C++. The main reason you may need native programming is performance. I have no exact numbers, but if we are talking about real-time processing, there is no point in looking for them: any VM will be slower than native code. (Usually Java code is 2-3 times slower than the same code in C++.) Another reason to choose native code is portability: you don't want to write the same thing all over again for iOS, do you?
On the other hand, the processing of already-captured images, for example, places no special demands on performance and may be done much more simply in Java, without native code.
But if our path leads to real time (through Android Studio and Gradle), let's begin.

BLE

The main competitor of BLE comes from the IEEE camp. Roughly speaking, classic Bluetooth compares to the IEEE 802.11 standards (Wi-Fi and especially Wi-Fi Direct), while BLE, as a native successor of the classic base, competes with IEEE 802.15 (ZigBee powered by 6LoWPAN). Both players – BLE and ZigBee – fit into a relatively new market segment: low-power communication with weak demands on traffic volume but an emphasis on long battery life and various network topologies. Obviously, the final cost of a standard's implementation and its simplicity for industrial solutions are critical factors as well. Today, for around $2 per chip in low volumes, you can buy a BLE system-on-chip solution (2.4 GHz radio plus microcontroller), which is well below the overall price point of similar devices for ZigBee.
From the radio perspective, being low-powered simply means sleeping between short activity sessions and consequently being able to wake up as fast as possible. The implications of such sleeping for the transmission rate are significant, but in the field of IoT and other low-traffic domains even a rate of 5-10 KB/s might be enough; long battery life is more important. The official Bluetooth Core 4.1 specification declares a raw radio bit rate of 1 Mbps for a BLE device, but the actual throughput may be significantly lower; in reality, 10 KB/s may be the production mark for many devices. Although this rate might seem slow and counterproductive, it does highlight the primary design goal of Bluetooth Low Energy: low energy. Even transmitting at these relatively modest data rates would quickly drain any small coin cell battery. The easiest way to avoid consuming precious battery power is to turn the radio off as often as possible and for as long as possible, and this is achieved by using short bursts of packets (during a connection event) at a certain frequency. In between those, the radio is simply powered off. For example, the Nordic nRF52833 SoC can transmit up to six data packets per connection interval, while each outgoing data packet can contain up to 20 bytes of user data. Now, assuming the shortest connection interval of 7.5 ms, this gives a maximum of 133 connection events per second and 120 bytes per connection event (6 packets * 20 user bytes per packet). Continuously transmitting at this rate gives
133 connection events per second * 120 bytes = 15,960 bytes/s ~ 15.6 KB/s (~0.13 Mbit/s).
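This back-of-the-envelope arithmetic can be sketched as a small calculation (plain Java; names are illustrative):

```java
public class BleThroughput {
    // Estimate best-case BLE 4.x application throughput in bytes per second,
    // given the connection interval and the per-event packet budget.
    static int bytesPerSecond(double connectionIntervalMs,
                              int packetsPerEvent, int userBytesPerPacket) {
        int eventsPerSecond = (int) (1000.0 / connectionIntervalMs); // 133 for 7.5 ms
        return eventsPerSecond * packetsPerEvent * userBytesPerPacket;
    }

    public static void main(String[] args) {
        // 7.5 ms interval, 6 packets of 20 user bytes per connection event
        System.out.println(bytesPerSecond(7.5, 6, 20)); // prints 15960
    }
}
```

Plugging in a longer connection interval (say 100 ms) immediately shows how quickly the budget collapses, which is exactly the trade-off between throughput and battery life.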
The TI SensorTag advertises with a 100 ms interval.
The connection interval (usually between 7.5 and 4000 ms) is dictated by the central device, but neither Android nor iOS applications have any control over the connection parameters.
So, in practice, a typical best-case scenario should probably assume a potential maximum data throughput in the neighborhood of 5-10 KB/s, depending on the limitations of both peers.
This gives an idea of what you can and can’t do with BLE in terms of pushing data out to the device and why other technologies such as WiFi and classic Bluetooth have their place in the world.

2. A BLE device can communicate with the outside world in two ways: broadcasting and connections. Both are based on GAP (Generic Access Profile).
Broadcasting
Broadcasting essentially allows sending data one-way to anyone capable of picking up the transmitted data. It is important to understand because it is the only way for a device to transmit data to more than one peer at a time.
The standard advertising packet contains a 31-byte payload used to include data that describes the broadcaster and its capabilities, but it can also include any custom information you want to broadcast to other devices. If this standard payload isn't large enough to fit all of the required data, BLE also supports an optional secondary advertising payload (called the Scan Response), which allows devices that detect a broadcasting device to request a second advertising frame with another 31-byte payload, for up to 62 bytes in total. Very popular examples of broadcasting are iBeacon (the format defined by Apple) and AltBeacon (the open format, aka Alternative Beacon).
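Internally, each 31-byte payload is a sequence of AD structures: a length byte (counting the type byte plus the data), a type byte, and the data itself. A minimal parser sketch in plain Java (class and method names are illustrative, not an Android API):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

public class AdvertisingParser {
    // Split a raw advertising payload (max 31 bytes) into AD structures,
    // keyed by AD type. Each structure is [length][type][data...],
    // where length counts the type byte plus the data bytes.
    static Map<Integer, byte[]> parse(byte[] payload) {
        Map<Integer, byte[]> fields = new LinkedHashMap<>();
        int i = 0;
        while (i < payload.length) {
            int len = payload[i] & 0xFF;
            if (len == 0 || i + 1 + len > payload.length) break; // padding or malformed tail
            int type = payload[i + 1] & 0xFF;
            fields.put(type, Arrays.copyOfRange(payload, i + 2, i + 1 + len));
            i += 1 + len;
        }
        return fields;
    }
}
```

For instance, the bytes {0x02, 0x01, 0x06} form a single AD structure of type 0x01 (Flags) with the data byte 0x06.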
A major limitation of broadcasting, compared to a regular connection, is that there are no security or privacy provisions at all: any observer device is able to receive the data being broadcast, so it might not be suitable for sensitive data.
Connections
The widely known Central and Peripheral (slave) roles apply only to the BLE connection mode.
When in the Central role, the device periodically scans the preset frequencies for connectable advertising packets and, when suitable, initiates the connection. Once the connection is established, the central manages the timing and initiates the periodic data exchanges.
A device in the Peripheral role sends connectable packets periodically and accepts incoming connections. Once in an active connection, the peripheral follows the central's timing and its data-exchange regulations.
The most important mark of a BLE 4.0 connection is that once the connection is established, the peripheral stops advertising and thus becomes invisible to other devices. On the other hand, the two connected devices can begin a secure data exchange between them in both directions.
Starting from BLE version 4.1, though, it is possible to connect one central to more than one peripheral device. (In addition, even a peripheral can be connected to multiple centrals.)
Obviously, connections have the advantage of organizing the data with much finer-grained control over each field or property through the use of additional protocol layers and, more specifically, the Generic Attribute Profile (GATT). Data is organized around units called services and characteristics, each characteristic with its own level of access rights and descriptive metadata.

Profiles.
To enable interoperability between various BLE devices, profiles play a significant role by organizing the use of protocols (which deal with formats, routing, encoding, etc.) in order to achieve a particular goal. In this respect, they are similar to the 802.11 instructions for Wi-Fi.
Two profiles are very important: GAP and GATT.
Generic Access Profile (GAP) defines low-level interactions with devices and roles (Central, Peripheral, etc.).
Generic Attribute Profile (GATT) establishes in detail how to exchange all profile and user data over a BLE connection, including its own roles (Client, Server).
It is worth mentioning that GATT roles are both completely independent of GAP roles and also compatible with each other. That means that both a GAP central and a GAP peripheral can act as a GATT client or server, or even act as both at the same time.
These profiles are often used as a foundation for APIs, as the entry point for the application to interact with the protocol stack.
Use-case profiles are limited to GATT-based profiles. However, the introduction of L2CAP connection-oriented channels in BLE specification 4.1 might mean that GATT-less profiles start to appear in the near future.
For the meanwhile, the Bluetooth SIG has defined a set of use-case profiles, based on GATT, covering a wide range of use cases. The full list is here.

As was predicted ("Getting Started with Bluetooth Low Energy" by Akiba, Robert Davidson, Carles Cufí and Kevin Townsend. Chapter 5, Hardware Platforms): "One big weakness of the CC2541 is the relatively dated 8051 core driving its SoC, which not only requires an expensive commercial compiler and IDE to use (IAR Embedded Workbench), but also feels quite long in the tooth compared to more modern ARM Cortex-M cores making their way into many other SoCs. This will likely change in the near future, as SoC vendors feel greater pressure to move to newer cores, and TI is no doubt well aware of the trend."
TI's answer was amazing: not only is the modified version of the SensorTag – the SimpleLink – based on a modern ARM Cortex-M3 core, and not only does it include more sensors, like light and a microphone – with the help of OAD it may be loaded not only with BLE but also with 6LoWPAN and ZigBee firmware.
Software
Aside from the well-known Android and iOS frameworks for BLE, it's worth mentioning the intriguing JavaScript approach and the new (but a little lame) framework for WP 8.1.
 
From the security point of view, Android distinguishes between different types of Wi-Fi networks: Open, WEP, WPA(2)-PSK, and WPS for Wi-Fi Direct. The first and second types are barely used when security really matters for the traffic. Practically, the only difference between them is the cost of the effort required to hack such a network. The first case costs only inserting a sniffing device (like a Linux notebook equipped with aircrack-ng or airmon-ng) into the proximity of the network, which usually varies from 10 to 50 meters and depends on the antenna of the Linux box as well. The Linux box is turned into a sniffer by a simple procedure: it accepts all network packets and, instead of throwing away those not addressed to itself, keeps them for further analysis.
The second case is also easy to hack. Its weakness primarily stems from the short length of the group cipher keys (40 or 104 bits) and from the fact that WEP actually transmits several bytes of the calculated temporary key within each data packet. Literally, it's hackable by design.
The last method - WPA(2) - was derived from WEP's errors. After the handshaking procedure, it supports several pairwise cipher algorithms: CCMP and TKIP. Due to the known attack on the message integrity check algorithm called Michael used in TKIP, in practice WPA is used only with the CCMP method.
At the handshaking stage, WPA(2) supports two authentication modes: PSK (Pre-Shared Key) and Enterprise. PSK assumes that every client knows the password/passphrase for network access. This is very handy for home Wi-Fi networks, but may be unacceptable for enterprises, where such a common password would have to be changed frequently (e.g. on every fired employee).
None of these three security types is useful for Wi-Fi Direct, simply because it may be very annoying to type a password (6-8 non-trivial characters) on printers, game consoles and TVs. WPS addresses this area and allows enabling network access by pressing an accept key on such devices. For Wi-Fi Direct, WPS is the only supported access method.
Back to regular Wi-Fi, the following table summarizes the WifiConfiguration options that should be used to establish an AP and to connect to an established one.

                         WPA(2)-PSK                   WEP                      Open
allowedAuthAlgorithms    -                            OPEN, SHARED             clear()
allowedProtocols         RSN, WPA                     RSN, WPA                 RSN, WPA
allowedKeyManagement     WPA_PSK                      NONE                     NONE
allowedPairwiseCiphers   CCMP, TKIP                   CCMP, TKIP               CCMP, TKIP
allowedGroupCiphers      WEP40, WEP104, CCMP, TKIP    WEP40, WEP104            WEP40, WEP104, CCMP, TKIP
preSharedKey             "\"" + password + "\""       "\"" + password + "\""   -


An implementation of these settings may look like:


static public WifiConfiguration createConfigForWPAAccess(String ssid,
                                                         String password){
    WifiConfiguration wifiConf = new WifiConfiguration();
    wifiConf.SSID = ssid;
    // Protocols
    wifiConf.allowedProtocols.set(WifiConfiguration.Protocol.RSN);
    wifiConf.allowedProtocols.set(WifiConfiguration.Protocol.WPA);
    // Key Management
    wifiConf.allowedKeyManagement.set(WifiConfiguration.KeyMgmt.WPA_PSK);
    // Pairwise Ciphers
    wifiConf.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.CCMP);
    wifiConf.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.TKIP);
    // Group Ciphers
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.CCMP);
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.WEP40);
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.WEP104);
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.TKIP);
    wifiConf.preSharedKey = "\"".concat(password).concat("\"");

    return wifiConf;
}
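For completeness, the Open column of the table might be implemented along the same lines; a sketch, assuming the same WifiConfiguration API (not verified on every Android version):

```java
static public WifiConfiguration createConfigForOpenAccess(String ssid) {
    WifiConfiguration wifiConf = new WifiConfiguration();
    wifiConf.SSID = ssid;
    // No authentication at all: drop the auth algorithms entirely
    wifiConf.allowedAuthAlgorithms.clear();
    // Key Management
    wifiConf.allowedKeyManagement.set(WifiConfiguration.KeyMgmt.NONE);
    // Protocols
    wifiConf.allowedProtocols.set(WifiConfiguration.Protocol.RSN);
    wifiConf.allowedProtocols.set(WifiConfiguration.Protocol.WPA);
    // Pairwise Ciphers
    wifiConf.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.CCMP);
    wifiConf.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.TKIP);
    // Group Ciphers
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.WEP40);
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.WEP104);
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.CCMP);
    wifiConf.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.TKIP);
    // No preSharedKey for an open network
    return wifiConf;
}
```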


Preface
The procedures described here are for peer discovery over a Wi-Fi Direct network. Although they may be used for auxiliary purposes, it's almost impractical to apply them to real apps. Finding a peer only demonstrates its ability to communicate, but without knowing the capabilities of the peer, such communication is like a conversation without a subject. To discover the services advertised by peers and to talk to them on a particular subject, use Service Discovery (mDNS and DNS-SD).

Background
Wi-Fi Direct has become very popular recently because
a) it has minimal and relatively low-cost hardware requirements, namely only the use of 802.11g, and
b) it may be implemented entirely in software over a traditional Wi-Fi radio.
In contrast to traditional Wi-Fi networks, where clients discover and associate with a pre-defined AP (Access Point), a major novelty of Wi-Fi Direct is that these roles are assigned dynamically; hence a Wi-Fi Direct device has to implement both the role of a client and the role of an AP (Soft-AP).
Wi-Fi addressing basics. Any Wi-Fi adapter, or more precisely its antenna, receives radio waves in a pre-defined ISM band (usually 2.4 GHz). Since the antenna cannot reject packets targeted at other devices, it is the task of the wireless NIC to capture only the packets whose destination is its own device. Actually, the NIC (Linux driver) supports 6 operation modes.
In the Wi-Fi Direct realm, a group is the equivalent of a traditional Wi-Fi network. The device implementing the AP-like functionality is referred to as the group owner (GO). When two devices discover each other, they negotiate their roles to establish the group; however, a device can declare its intent to be group owner via WifiP2pConfig.groupOwnerIntent, in the range 0-15.
Like a traditional AP, a P2P GO announces itself through beacons and is required to run a DHCP (Dynamic Host Configuration Protocol) server to provide P2P clients with IP addresses.
Although Android announced its support for Wi-Fi P2P starting from ver. 4.0 (ICS), there are devices that only include the Wi-Fi P2P API as a part of the operating system but lack an appropriate package manager.

1. Basics
The foundational infrastructure for all Android Wi-Fi (including Wi-Fi Direct) is a slightly tailored part of Linux Wireless.
2. Device Discovery
Before any connection can be established, P2P devices have to find each other. For this they alternately listen and send probe requests with additional P2P information elements on the so-called social (and non-overlapping) ISM (Industrial, Scientific and Medical) channels, which are channels 1, 6 and 11 on the 2.4 GHz band.
(Rumor has it that there are 13 channels in total on the 2.4 GHz band, and the 13th is forbidden in most countries, excluding, for example, Bolivia.) Some network monitors may switch the channel they listen on, i.e. perform a procedure called "channel hopping". P2P assumes the transmitting channel is selected once and does not change for the entire life cycle of the communication.

During discovery and negotiation, P2P devices use their global MAC address as a Device ID.
From the Android perspective, this is a very standard procedure. It is well documented and illustrated by the WiFiDemo sample from Hardik Trivedi. Generally, you use various methods of WifiP2pManager to initiate operations on wpa_supplicant and receive system intents in a specially provided Broadcast Receiver.

3. Connection is established

4. Using Sockets
Immediately after the group owner's address becomes known to the client (usually in onConnectionInfoAvailable()), nothing can stop it from opening a socket to this address. The server side does the opposite procedure.

5. Notes
The latest Android versions support the concept of a "Persistent Group" or, as it is called in the Wi-Fi Direct settings, a "Remembered Group". The idea behind this concept is to allow reconnection to the group without user intervention. It's really good for sequentially connected devices like home equipment, but may cause problems with devices that are expected to establish new connections frequently.
The concept is described in the IEEE 802.11 standard and may be briefly summarized as follows.
During the group formation process, P2P devices can declare a group as persistent by using a flag in the Beacon frames, Probe Responses and GO negotiation frames. In this way, the devices forming the group store the network credentials and the assigned P2P GO and client roles for subsequent re-instantiation of the P2P group. Specifically, after the Discovery phase, if a P2P device recognizes that it has formed a persistent group with the corresponding peer in the past, either of the two P2P devices can use just the Invitation Procedure (a two-way handshake) to quickly re-instantiate the group.
Unfortunately, the WifiP2pManager.connect() method does not send the proper invitation to the peer even if the corresponding persistent group exists. Maybe this is because such support was only introduced in Android 5 and is lacking in previous P2P implementations.
Anyway, the documented API does not provide methods for deleting persistent groups. In case the deletion is really needed, one can use reflection in this manner:

Method[] methods = WifiP2pManager.class.getMethods();
for (int i = 0; i < methods.length; i++) {
    if (methods[i].getName().equals("deletePersistentGroup")) {
        // Delete any persistent group
        for (int netid = 0; netid < 32; netid++) {
            methods[i].invoke(mManager, mChannel, netid, null);
        }
    }
}

Must-read:
0. Absolutely must: Zero Configuration Networking: The Definitive Guide. By Cheshire S. and Steinberg D.H.
1. Device to device communications with WiFi Direct: overview and experimentations.
2. Why Wi-Fi Direct cannot replace Ad-hoc mode.
3. Using WiFi P2P for Service Discovery (Google official documentation).
4. Wi-Fi Direct. White Paper.
5. P2Feed site.
We come back to Activity launch modes because of the way WAMS notifications are initialized. The WAMS NotificationHandler requires a Context in its registration handlers, and we have little choice but to provide some (usually the root main) activity to it as the desired context. The problem with this approach is the re-creation of an activity started with the 'standard' launch mode. Because this mode is used by default, an activity for which the launch mode is not declared explicitly will be launched under its conditions. The conditions of the 'standard' launch mode imply that the system always creates a new instance of the activity in the target stack and (probably) destroys the previously launched instance. This, in turn, causes the NotificationHandler to be unregistered (on destroy) and registered again (on create). Obviously, this is not the desired behavior.
In order to prevent these repeated registration and un-registration events, our main activity will instead be declared to start with the 'singleTop' launch mode.
Under this mode, if an instance of the activity exists at the top of the target task, the system routes the intent to that instance through a call to its onNewIntent() method; no new instance of the activity is created.
This is much better for the NotificationHandler. Under these conditions, it will be registered on create and continue living, surviving child activity launches and closes. The only way to un-register the NotificationHandler inside a 'singleTop' activity is to actually destroy the activity. But indeed, such destruction is a logical end for the handler. For the entire life of the application (as long as its root activity remains on the task stack) the handler is alive!

‘Standard’ Launch Mode
With the 'standard' (default) launch mode, when execution is passed to a child activity, the main activity is stopped (onStop) and not removed from the stack.
When navigating Up from the child back to such an activity, it is destroyed (onDestroy) and created again (onCreate).
To enable Up navigation, the child activity should specify its parent activity (preferably in the manifest file) and call get[Support]ActionBar().setDisplayHomeAsUpEnabled(true); from within its onCreate() handler.
As long as the parent activity is specified for the child, pressing the left-facing caret is handled automatically as Up navigation. 'Automatically' means that the Activity class' handler executes code similar to the following:
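A sketch of that code, assuming the support library's NavUtils helper:

```java
@Override
public boolean onOptionsItemSelected(MenuItem item) {
    if (item.getItemId() == android.R.id.home) {
        // Navigate Up to the parent activity declared in the manifest
        NavUtils.navigateUpFromSameTask(this);
        return true;
    }
    return super.onOptionsItemSelected(item);
}
```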
Obviously, no one can prevent you from including this code in the onOptionsItemSelected() handler yourself and thus avoiding the call to super.onOptionsItemSelected(item). In any case, this click causes the child activity to receive an onOptionsItemSelected() call as a usual event from the ActionBar. The ID of this action is android.R.id.home.
Thus, the 'standard' launch mode causes the system to always create a new instance of the activity in the target stack.
There may be several instances of such an activity on the stack.
‘singleTop’ Launch Mode
Under the 'singleTop' launch mode, a new instance of the activity is not created when navigating Up from a child, as long as the activity is at the top of the task stack. When it is at the top, new intents are delivered to it through onNewIntent(), and then the activity gets started (onStart).


ML (Azure) terminology & basics.

Let's start with the definition of a feature of an object. Essentially, a feature is just some function of the object: for example, R(p), the red component of the color of a pixel object; height and age are features of a person participating in a study; meter is a feature of a poem, and so on. As a rule, several features are collected and arranged into a feature matrix. The collected features, supplemented with a value for each object, are called a training set (datasets in MAML). For example, the author's name may be the value in a set built from a poem's meter, rhyme type and genre; whether a pixel is a "corner point" may be the value in a set built from its three color components, and so on. However, membership in a particular class is often not the only possible characteristic of an object. One often considers sets where the characteristic of an object is some real-valued quantity; in this case we speak of a regression task. When the characteristic of an object is a finite (possibly large) value, we speak of a ranking task. In general, four main machine learning tasks are distinguished:
The following tasks belong to the class of supervised learning:

  • classification

  • regression

The class of unsupervised learning includes:

  • clustering (ranking?)

  • anomaly detection

MAML_tasks
The clustering task consists in finding similar groups in the training data and then dividing the data into subgroups by the discovered similarity principle. Amazon.com, for example, uses algorithms of this category to split users into groups by social information. Each new user of the site is placed into the group where users similar to him by social parameters are registered. Later this is used to produce recommendations. MAML uses the k-means algorithm for clustering.

A classic example of a classification task is the iris flower task (Fisher, 1936). Its solution rests on the compactness hypothesis, which assumes that similar objects, as a rule, belong to the same class. For the iris case this means the task is assumed solvable, i.e. nature itself created distinct species of these flowers that can be separated from one another. On the other hand, from the machine learning point of view, the features by which the species are separated have already been selected for us by a biologist. Formally, we cannot know which feature is the separating one: Fisher's experiment gives the length and width of the petal and the sepal, but one could just as well try to separate the species by, say, stem length or flower color. Identifying the separating features is itself a task that must be solved before the classification process starts. In this case Fisher simply used the services of a botanist (E. Anderson), who had already named the length and width of the petal and sepal as the separating features. In any case, having thanked him, we will deal further only with classification by the already selected separating features.
We will apply the k-means method to the clustering task on Fisher's irises, noting first that the method itself was proposed by a Galician mathematician who in his time was Banach's scientific advisor, Hugo Steinhaus. The training set can be found in the well-known UCI repository and, of course, it is available among the ready-made datasets of MAML Studio. Although Fisher's original set covers 3 classes of flowers and 150 objects, MAML provides a set for only 2 classes and 100 objects. In ARFF format its beginning looks like this:
@RELATION Unnamed
@ATTRIBUTE Class NUMERIC
@ATTRIBUTE sepal-length NUMERIC
@ATTRIBUTE sepal-width NUMERIC
@ATTRIBUTE petal-length NUMERIC
@ATTRIBUTE petal-width NUMERIC
@DATA
1,6.3,2.9,5.6,1.8
0,4.8,3.4,1.6,0.2
1,7.2,3.2,6,1.8

Before starting, the k-means algorithm somehow chooses k cluster centers (centroids) among the objects of the observation set. The way these centers are chosen is very important for the efficiency of the algorithm, and there are several methods for choosing them, e.g. k-means++, k-means++ fast, and so on.
Once the cluster centers are chosen, the algorithm alternates between two steps.
Assignment step: apply the Voronoi diagram to the current cluster centers, i.e. split the observation set into k disjoint clusters such that the distance from a cluster's center to each of its points is smaller than the distance from any of those points to the center of another cluster.
Update step: compute the new cluster centers as the centers of mass of their clusters, trying to minimize the distance from each center to all points of its cluster.
The algorithm stops when the cluster centers do not change after the second step.
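The two alternating steps can be sketched in a few lines of plain Java (class and method names are illustrative; this is not MAML's implementation, and the seed centroids are simply taken from the sample):

```java
import java.util.Arrays;

// Minimal k-means sketch over 2-D points (e.g. petal length/width).
public class KMeansSketch {

    // Index of the centroid closest to point p (squared Euclidean distance).
    static int nearest(double[] p, double[][] centroids) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int i = 0; i < centroids.length; i++) {
            double dx = p[0] - centroids[i][0], dy = p[1] - centroids[i][1];
            double d = dx * dx + dy * dy;
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }

    static int[] cluster(double[][] points, double[][] centroids, int iterations) {
        int[] assign = new int[points.length];
        for (int it = 0; it < iterations; it++) {
            // Assignment step: attach every point to its closest centroid (its Voronoi cell).
            for (int i = 0; i < points.length; i++)
                assign[i] = nearest(points[i], centroids);
            // Update step: move each centroid to the center of mass of its cluster.
            for (int c = 0; c < centroids.length; c++) {
                double sx = 0, sy = 0; int n = 0;
                for (int i = 0; i < points.length; i++)
                    if (assign[i] == c) { sx += points[i][0]; sy += points[i][1]; n++; }
                if (n > 0) { centroids[c][0] = sx / n; centroids[c][1] = sy / n; }
            }
        }
        return assign;
    }

    public static void main(String[] args) {
        // Petal length/width taken from the first rows of the MAML iris sample.
        double[][] points = { {5.6,1.8}, {1.6,0.2}, {6.0,1.8}, {1.4,0.2}, {5.1,1.9}, {1.5,0.4} };
        double[][] centroids = { {5.6,1.8}, {1.6,0.2} }; // seed with two sample points
        System.out.println(Arrays.toString(cluster(points, centroids, 10)));
        // prints [0, 1, 0, 1, 0, 1]
    }
}
```

With only a handful of points the assignment stabilizes after the first iteration; a real run would also test for centroid convergence instead of a fixed iteration count.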

Let us build a MAML experiment with Fisher's irises for 2 clusters:

Iris_1

When the K-Means Clustering module is included in an experiment, it returns a clustering model that can serve as an input parameter for Train Clustering Model.
After running the experiment, the visualization of the result gives the following arrangement of clusters:
Iris_Clusters_Results
And here is an example of solving similar tasks with a parameterized linear classifier:
MAML_linear_classificator
Classifiers are interpreted very simply: +1 at the output is understood as membership in the class, -1 as non-membership. The interpretation of regression operators is trivial.
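The +1/-1 interpretation is easy to see on a toy linear classifier, sign(w·x + b). The weights and threshold below are hypothetical, chosen only to separate the two iris sample rows by petal length:

```java
// Minimal sketch of a parameterized linear classifier: sign(w·x + b).
// Class name, weights and threshold are illustrative, not MAML's.
public class LinearClassifierSketch {

    // Returns +1 (belongs to the class) or -1 (does not belong).
    static int classify(double[] w, double b, double[] x) {
        double s = b;
        for (int i = 0; i < w.length; i++) s += w[i] * x[i];
        return s >= 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        double[] w = {1, 0};  // use only petal length
        double b = -2.5;      // hypothetical decision threshold
        System.out.println(classify(w, b, new double[]{5.6, 1.8})); // prints 1
        System.out.println(classify(w, b, new double[]{1.6, 0.2})); // prints -1
    }
}
```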

Use NDK in Android Studio, part 3

This time we'll talk about compiling and linking our C++ code with third-party libraries: .a or .so files. Just in case you forgot, .a files are called "static libraries" in the Linux realm, and .so files (shared objects) are called shared libraries. The rule of thumb in the Android world is that "Only shared libraries will be installed/copied into your application package. Static libraries can be used to generate shared libraries though". And because you can define more than one module in each Android.mk file, it's quite possible to build a shared library from a static one. For example:

LOCAL_PATH:= $(call my-dir)
include $(CLEAR_VARS)
# The purpose of this part of the makefile is to create a wrapper around the libfastcv.a library
# The build module will also be called 'libfastcv'
LOCAL_MODULE    := libfastcv
LOCAL_SRC_FILES := ../../libs/libfastcv.a
include $(PREBUILT_STATIC_LIBRARY)
....
include $(CLEAR_VARS)
LOCAL_MODULE    := libfastcvUtils
LOCAL_CFLAGS    := -Werror
LOCAL_C_INCLUDES += $(JNI_DIR)
LOCAL_SRC_FILES := \
FastCVUtil.cpp
LOCAL_LDLIBS := -llog
LOCAL_SHARED_LIBRARIES := liblog libGLESv2
LOCAL_STATIC_LIBRARIES := libfastcv
include $(BUILD_SHARED_LIBRARY)

and vice versa: a shared library linked against a prebuilt shared library:
LOCAL_PATH:= $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := fastcvUtils
LOCAL_SRC_FILES := $(LIB_PATH)/libfastcvUtils.so
include $(PREBUILT_SHARED_LIBRARY)

include $(CLEAR_VARS)
LOCAL_MODULE    := libfastcvFeatDetect
LOCAL_CFLAGS    := -Werror
LOCAL_SRC_FILES := Corner.cpp
LOCAL_STATIC_LIBRARIES := libfastcv
LOCAL_SHARED_LIBRARIES := liblog libGLESv2 fastcvUtils
LOCAL_C_INCLUDES += $(JNI_DIR)/fastcv
LOCAL_C_INCLUDES += $(JNI_DIR)
include $(BUILD_SHARED_LIBRARY)
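On the Java side, remember that only the shared libraries end up in the APK, and a library's dependencies must be loaded before the library itself. A sketch, reusing the module names from the makefiles above (older Android versions do not resolve .so dependencies automatically):

```java
static {
    // Names passed to loadLibrary drop the "lib" prefix and ".so" suffix.
    System.loadLibrary("fastcvUtils");       // dependency first
    System.loadLibrary("fastcvFeatDetect");  // then the module that links against it
}
```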