I have the Logstash config file below. Elasticsearch is analyzing my data as "a b", whereas I want it read as "ab". I found that I need to use not_analyzed for my Sscat field, and min_shingle_size/max_shingle_size for the Products field, to get the best results.

Should I use not_analyzed for the Products field as well? Would that give a better result?

How should I fill in my_id_analyzer to actually apply the analyzer to different fields?

How should I connect the template with the Logstash config file?

What I have tried:

input {
    file {
        path => "path"
        start_position => "beginning"
    }
}
filter {
    csv {
        separator => ","
        columns => ["Index", "Category", "Scat", "Sscat", "Products", "Measure", "Price", "Description", "Gst"]
    }
    mutate { convert => ["Index", "float"] }
    mutate { convert => ["Price", "float"] }
    mutate { convert => ["Gst", "float"] }
}
output {
    elasticsearch {
        hosts => "host"
        user => "elastic"
        password => "pass"
        index => "masterdb"
    }
}
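
For reference, the logstash-output-elasticsearch plugin can also install an index template by itself through its template options. A minimal sketch of the output block, assuming the template JSON has been saved to a local file (the file path and template name here are hypothetical):

```ruby
output {
    elasticsearch {
        hosts    => "host"
        user     => "elastic"
        password => "pass"
        index    => "masterdb"
        # Hypothetical local path where the template JSON was saved.
        template           => "/path/to/logstash-id-template.json"
        template_name      => "logstash-id"
        template_overwrite => true
    }
}
```

With this in place, Logstash uploads the template on startup, so a separate curl PUT would not be needed.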

I also have a template that should do it for all the future files that I upload:

curl -u user:pass -XPUT "host/_template/logstash-id" -H 'Content-Type: application/json' -d '{
    "template": "logstash-*",
    "settings": {
        "analysis": {
            "analyzer": {
                "my_id_analyzer": {

                }
            }
        }
    },
    "mappings": {
        "properties": {
            "id": { "type": "string", "analyzer": "my_id_analyzer" }
        }
    }
}'
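
For what it is worth, the shingle settings mentioned above would live in the template's analysis section as a token filter plus a custom analyzer that references it. A sketch only, assuming an older Elasticsearch (pre-5.x) where "type": "string" and "index": "not_analyzed" are valid; the filter and analyzer names, the shingle sizes, and the "doc" mapping type are all hypothetical:

```json
{
    "template": "logstash-*",
    "settings": {
        "analysis": {
            "filter": {
                "my_shingle_filter": {
                    "type": "shingle",
                    "min_shingle_size": 2,
                    "max_shingle_size": 3
                }
            },
            "analyzer": {
                "my_shingle_analyzer": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "my_shingle_filter"]
                }
            }
        }
    },
    "mappings": {
        "doc": {
            "properties": {
                "Sscat":    { "type": "string", "index": "not_analyzed" },
                "Products": { "type": "string", "analyzer": "my_shingle_analyzer" }
            }
        }
    }
}
```

Here Sscat is stored as a single unanalyzed token (so "ab" stays "ab"), while Products is tokenized and recombined into 2- and 3-word shingles.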
Posted
Updated 29-Aug-18 23:45pm
v2

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
